Report: 85% of Organizations Are Leveraging AI

Eighty-five percent of organizations today are using some form of AI, according to the latest State of AI in the Cloud 2025 report from Wiz. While AI's role in innovation and disruption continues to expand, security vulnerabilities and governance challenges remain pressing concerns.

One of the most striking findings of the report is the meteoric rise of DeepSeek. The DeepSeek-R1 model has gained significant traction, accumulating 130,000 downloads on the AI platform Hugging Face.

Currently, 7% of organizations using self-hosted AI models are running DeepSeek, a share that doubled during January 2025 alone. However, this surge has been accompanied by security concerns, particularly after researchers uncovered an exposed DeepSeek database leaking sensitive data. The findings reinforce the need for stringent AI security and oversight.

Despite growing competition, OpenAI continues to dominate the AI landscape. The report notes that 75% of organizations now use self-hosted AI models, while 77% utilize dedicated AI/ML software. OpenAI and Microsoft's Azure OpenAI SDKs remain the most widely used, running in 67% of cloud environments.

This widespread adoption underscores OpenAI's stronghold in enterprise AI solutions, the report's authors claim, even as new players challenge its position.

The AI ecosystem remains a blend of open source and closed source solutions. The report found that eight of the top 10 hosted AI technologies are associated with open source models. This trend suggests that enterprises are increasingly integrating both public and proprietary AI tools to build flexible, scalable solutions.

Self-hosted AI models are seeing rapid adoption, with BERT's usage skyrocketing from 49% to 74% year-over-year. Meanwhile, new entrants such as Mistral AI and Alibaba Cloud's Qwen2 have gained traction, signaling increased diversity in the AI marketplace.

Although AI continues to unlock new opportunities for creativity and efficiency, its rapid deployment poses challenges around security, governance, and cost management. The report warns that many AI tools are being integrated without clear industry standards, raising concerns over risk visibility and responsible AI usage.

Security teams and developers must collaborate to mitigate risks, including data exposure and unauthorized AI usage within cloud environments, the report concluded.

Visit the Wiz site for the full report.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
