80% of Enterprises Will Deploy Generative AI Apps or APIs in Production Environments by 2026

In 2023, just 5% of enterprises had deployed generative AI apps or used generative AI APIs in production environments. By 2026, that figure will skyrocket to 80%, according to a new forecast from market research firm Gartner.

"Generative AI has become a top priority for the C-suite and has sparked tremendous innovation in new tools beyond foundation models," said Arun Chandrasekaran, distinguished VP Analyst at Gartner, in a prepared statement. "Demand is increasing for generative AI in many industries, such as healthcare, life sciences, legal, financial services and the public sector."

The firm identified three technologies expected to have the largest impact on all organizations:

  • Generative AI-enabled applications;

  • Foundation models; and

  • AI trust, risk and security management (AI TRiSM).

Generative AI-enabled applications are the most familiar to end users. They are widely used for "task augmentation" and to improve user experience, though they have weaknesses such as inaccuracies in output and "hallucinations," Gartner noted.

Foundation models, as Chandrasekaran noted, "are an important step forward for AI due to their massive pretraining and wide use-case applicability. Foundation models will advance digital transformation within the enterprise by improving workforce productivity, automating and enhancing customer experience, and enabling cost-effective creation of new products and services." Gartner predicted that foundation models would account for 60% of natural language processing use by 2027.

AI trust, risk and security management (AI TRiSM) is "an important framework for delivering responsible AI and is expected to reach mainstream adoption within two to five years. By 2026, organizations that operationalize AI transparency, trust and security will see their AI models achieve a 50% improvement in terms of adoption, business goals and user acceptance," according to Gartner. "AI TRiSM ensures AI model governance, trustworthiness, fairness, reliability, robustness, efficacy and data protection. AI TRiSM includes solutions and techniques for model interpretability and explainability, data and content anomaly detection, AI data protection, model operations, and adversarial attack resistance."

In a separate report, Gartner noted that CISOs in organizations that "operationalize" AI "need to champion AI TRiSM" to mitigate risks and improve outcomes.

"CISOs can’t let AI control their organization. AI requires new forms of trust, risk, and security management that conventional controls don’t provide," said Mark Horvath, VP analyst at Gartner, speaking at a conference in late September. "Chief information security officers need to champion AI TRiSM to improve AI results, by, for example, increasing the speed of AI model-to-production, enabling better governance or rationalizing AI model portfolio, which can eliminate up to 80% of faulty and illegitimate information."

Said Chandrasekaran: "Organizations that do not consistently manage AI risks are exponentially inclined to experience adverse outcomes, such as project failures and breaches. Inaccurate, unethical, or unintended AI outcomes, process errors, and interference from malicious actors can result in security failures, financial and reputational loss or liability, and social harm."

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.

