Report: AI Adoption Forces Trade-Off Between Speed and Identity Security

AI adoption is forcing organizations to trade security for speed — and identity controls are the first casualty, according to a new report from Delinea, a provider of identity security solutions for both human and AI agent identities.

A key finding of the 2026 Identity Security Report: 90% of organizations are forcing their security teams to loosen identity controls for AI.

In simpler terms, organizations are prioritizing speed over security in deploying AI tools, with leadership focused on faster adoption to drive productivity gains.

The major problem is that this approach leaves organizations heavily exposed to security vulnerabilities. Enterprises are fast-tracking AI initiatives despite significant gaps in AI identity discovery, monitoring, and privilege control.

"The pressure to move fast on AI is real, but identity governance has not kept pace, which exposes enterprises to significant risk," said Delinea CEO Art Gilliland.

Delinea surveyed more than 2,000 IT decision-makers actively using or piloting AI. According to the report, 90% of respondents had at least one identity visibility gap, with the largest gap tied to machine and non-human identities (NHIs), including accounts used by AI agents.

"As AI agents multiply across enterprise environments, these identities often have the least oversight," Gilliland said. "The organizations that will succeed in the AI era will be the ones that enforce real-time, contextual access across every human, machine, and agentic AI identity."

Other findings from the report include:

  • AI expansion is driving non-human identity risk: 42% of organizations said AI expansion has been one of the top factors increasing NHI risk in the past 12 months, far surpassing increased automation and CI/CD velocity (26%) and growth in cloud-native workloads (26%).
  • Limited visibility into privileged AI actions: 80% of organizations said they are not always able to determine why an NHI performed a privileged action, highlighting major challenges with traceability and accountability for automated identities.
  • Standing access remains the norm: 59% of organizations reported lacking viable alternatives to standing privileged access for NHIs and AI agents, increasing the risk that automated identities retain persistent permissions that could be exploited.
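To make the standing-access finding concrete, a common alternative is just-in-time access: instead of an AI agent holding a permanent privileged credential, it requests a short-lived one that expires automatically. The sketch below is purely illustrative (the agent name, token format, and five-minute TTL are assumptions, not details from the Delinea report):

```python
import secrets
import time

# Illustrative TTL: the credential expires after five minutes,
# after which the agent must request access again.
TTL_SECONDS = 300


def issue_credential(agent_id: str) -> dict:
    """Mint an ephemeral token scoped to one agent, with an expiry time."""
    return {
        "agent_id": agent_id,
        "token": secrets.token_hex(16),
        "expires_at": time.time() + TTL_SECONDS,
    }


def is_valid(credential: dict) -> bool:
    """A credential is honored only before its expiry timestamp."""
    return time.time() < credential["expires_at"]


# Hypothetical agent identity, for demonstration only.
cred = issue_credential("inventory-bot")
print(is_valid(cred))  # valid immediately after issuance, then lapses
```

The point of the pattern is that a leaked or forgotten credential loses value on its own, rather than persisting indefinitely the way standing privileged access does.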

The result is that traditional identity protections haven't kept pace with AI, and loosened identity controls have handed bad actors an exponentially larger attack surface.

The report concluded that AI will continue to break traditional security models as companies let security controls grow lax while identities and access points multiply.

"Clearly, organizations can't afford to slow down AI adoption," Delinea said. "But the study indicates that identity security must evolve alongside AI adoption."

For the full report, visit the Delinea site.
