Agentic AI Is Live. Enterprise Security Controls Are Not.

Thirty-seven percent of organizations now have AI agents deployed or in active testing. Only 3% have broad production deployment of agent-specific security controls, and 20% have none at all. That gap between where deployment has gone and where governance has followed is the central finding of ETR's 2026 State of Security study, drawn from 517 technology leaders across enterprise organizations. AI agents, software systems that take autonomous actions across tools, workflows, and data sources without step-by-step human direction, are no longer experimental.

The data traces a clear arc: conviction is high, budget has moved, deployment is accelerating, and the security infrastructure designed to govern it has not kept pace.

Key Takeaways

  • LLM and GenAI protection is the top security budget priority in 2026, with 59% of organizations planning increases, up from 50% in 2025

  • 37% of organizations have AI agents deployed or in active testing, up from 27% in 2025

  • Only 3% have broad production deployment of agent-specific security controls; 20% have none at all

  • Preventing sensitive data from entering AI prompts is the hardest data security problem, cited by 36% of respondents, twice the rate of the next concern

  • No governance model has emerged as the clear standard; three approaches sit within three points of each other


What Is Driving Security Budget Growth in 2026?

For the first time, LLM and generative AI protection dethroned cloud security as the top planned budget growth area in enterprise cybersecurity. In this year's study, 59% of organizations planned budget increases for LLM and GenAI protection, up from 50% in 2025, surpassing cloud security, which fell from 58% to 54%.

The evaluation data reinforces the shift. Security for generative AI debuted in the study at 75% in 2025 and jumped to 83% in 2026, the highest evaluation rate of any category tracked. LLM security is the second-fastest riser, climbing from 61% in 2024 to 76% in 2026, a 15-point gain in two years. When technology leaders are evaluating something at that rate, spending follows.

Spending already has. Fifty-four percent of organizations are currently spending on AI-related security tools or plan to within the next six months, up from 43% in 2025. The cohort reporting no plans dropped from 23% to 16%. This is not a market still debating whether to invest. The conversation has moved to how.

Actual tool penetration tells a more measured story, however. Thirty-nine percent of organizations still run AI in fewer than 10% of their security stack. The 10-25% tier is the fastest-growing segment, which signals that broad deployment is underway but has not yet arrived.

Chart: Spend Intent

Agentic AI Deployment Is Accelerating

On priority rankings, identity security holds the top position with a score of 68, but AI-focused security recorded the single largest year-over-year gain of any category, jumping from 49 to 58 and narrowing the gap with second-place data security to just two points.

Deployment data matches the priority scores. In 2026, 37% of organizations report AI agents deployed or in active testing, up from 27% in 2025. That breaks down as 4% fully implemented, 19% partially deployed, and 14% currently in testing. Another 18% plan to deploy within 12 months, and the "considering but no timeline" group fell from 34% to 27%.

Conviction among security leaders is equally clear. Sixty-eight percent rate AI agents four or five out of five for importance to cybersecurity's future, up from 62% in 2025. Less than 1% rated them a one. The market has committed.


Security Controls Have Not Kept Pace

Despite the deployment momentum, only 3% of organizations report broad production deployment of agent-specific security controls. Twenty percent report no agent-specific controls at all. The majority sits in pilot territory: 29% are running pilots with limited human oversight and manual review, and another 24% are running pilots with enforced technical guardrails.

The panel discussion that accompanied the survey findings made this dynamic explicit. "The hype is so much right now that we're just rushing to implement, rushing to put something in, rushing to deliver some sort of proof of value," said one panelist, a CIO and CISO at a large educational institution. "Security is a little bit lagging."

Governance is equally unsettled. Three approaches sit within three points of each other: a single centralized control plane (26%), case-by-case risk tiering (25%), and identity-centric governance using identity security tools (23%). Only 7% rely entirely on native controls from each agent platform. No single approach commands a clear majority. Organizations are largely building governance models in parallel with deployment, without consensus on which model works.
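Of the three contending models, case-by-case risk tiering is the simplest to picture in practice. The sketch below is a minimal illustration, not drawn from the study; the action names and tier labels are made up. It shows the core idea: low-risk agent actions run autonomously, while high-impact actions are queued for human approval.

```python
# Hypothetical sketch of case-by-case risk tiering for agent actions.
# Action names and tiers are illustrative, not from the ETR study.

ACTION_TIERS = {
    "summarize_ticket": "low",
    "update_record": "medium",
    "delete_customer_data": "high",
}

def dispatch(action, *, human_approved=False):
    """Run low/medium-risk actions autonomously; gate high-risk ones."""
    # Unknown actions default to the highest tier: fail closed.
    tier = ACTION_TIERS.get(action, "high")
    if tier == "high" and not human_approved:
        return "queued_for_approval"
    return "executed"

dispatch("summarize_ticket")        # runs autonomously
dispatch("delete_customer_data")    # held for a human reviewer
```

The fail-closed default for unknown actions is the important design choice: an agent gaining a new capability should never silently inherit autonomy.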


What Are the Biggest Identity and Data Risks of Agentic AI?

When respondents identified the top identity-related risks of agentic AI, two concerns dominated. Fifty-seven percent flagged agents acting outside their intended context or policy, and 56% flagged agents being over-privileged. Privilege escalation through chained tool calls and lack of non-repudiation, meaning the inability to prove what an agent did and under whose authority, followed at 37% and 36%, respectively.
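To make the over-privilege and chained-call risks concrete, here is a minimal sketch of one common mitigation: a per-agent tool allowlist that also checks the agent that originated a call chain, so delegation between agents cannot escalate privileges. The agent and tool names are hypothetical, and this is one possible pattern rather than a prescribed control.

```python
# Hypothetical sketch: per-agent tool allowlists with chain-origin checks.
# A delegated call may never exceed the privileges of the agent that
# started the chain. All names here are illustrative.

class PolicyViolation(Exception):
    pass

# Each agent identity maps to the tools it may invoke.
AGENT_POLICIES = {
    "report-agent": {"read_crm", "summarize"},
    "billing-agent": {"read_crm", "issue_refund"},
}

def invoke_tool(tool, *, caller, originator=None):
    """Allow a call only if BOTH the direct caller and the chain's
    originating agent are permitted to use the tool."""
    originator = originator or caller
    for agent in (caller, originator):
        if tool not in AGENT_POLICIES.get(agent, set()):
            raise PolicyViolation(f"{agent} may not call {tool}")
    return f"{tool} executed for {originator}"

invoke_tool("read_crm", caller="report-agent")  # allowed

# report-agent delegates to billing-agent; the refund is still blocked
# because the originating agent lacks that privilege.
try:
    invoke_tool("issue_refund", caller="billing-agent",
                originator="report-agent")
except PolicyViolation:
    pass  # escalation via the chained call is denied
```

Propagating the originator through the chain is what closes the escalation path: checking only the immediate caller would let a narrowly scoped agent launder high-privilege actions through a broader one.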

The same panelist framed it directly: "When we think about agentic, we really have to also think about what autonomous activity they're doing, what data they're manipulating, what workflows they're triggering, and how they're making decisions. They have their own identity. When we think about securing these agents, we need to think about our identity governance, privilege management, and ensuring that they are getting the same controls that we would put with our own staff members."

On the deployment side, the same themes surface. Fifty-seven percent cite lack of visibility into what AI agents accessed, and 56% cite controlling non-human identity (NHI) privileges, meaning the credentials and access rights assigned to AI agents and automated systems, as the hardest operational problems. Three of the top four concerns are upstream access-control failures. Technology leaders frequently do not know what their agents are doing or as whom. Approval workflows for high-impact actions ranked last among deployment challenges at 25%, suggesting that organizations are not yet at the stage of governing what agents do; they are still trying to see it.
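The visibility and non-repudiation gaps above are usually addressed with an attributable audit trail: every agent action recorded with the agent's identity and the human principal it acts for. The sketch below is a minimal illustration under assumed names (the decorator, agent ID, and record fields are all invented for this example).

```python
# Hypothetical sketch: an audit trail that attributes every agent tool
# call to an agent identity AND the human it acts on behalf of.
import datetime
import json

AUDIT_LOG = []

def audited(agent_id, acting_for):
    """Decorator that records each call with who did what, as whom."""
    def wrap(fn):
        def inner(*args, **kwargs):
            AUDIT_LOG.append({
                "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "agent": agent_id,
                "on_behalf_of": acting_for,
                "action": fn.__name__,
                "args": json.dumps([args, kwargs], default=str),
            })
            return fn(*args, **kwargs)
        return inner
    return wrap

@audited("hr-agent", acting_for="alice@example.com")
def fetch_employee_record(emp_id):
    # Stand-in for a real tool call the agent performs.
    return {"id": emp_id}

fetch_employee_record(42)
# AUDIT_LOG now holds one entry attributing the read to hr-agent,
# acting for alice@example.com.
```

In production this log would be append-only and signed so the record itself supports non-repudiation; the sketch only shows the attribution structure.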

Data security tells a parallel story. Preventing sensitive data from entering prompts or model memory is the hardest data security problem by a wide margin, cited by 36% of respondents. That is exactly twice the 18% who named the next highest concern. Among the layers technology leaders consider most critical to their AI and LLM security strategy, data security ranked first at 50%, followed by identity and access management at 48%, and AI platform security at 42%.
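Pattern-based redaction before a prompt leaves the application boundary is one common starting point for the prompt-leakage problem. The sketch below is deliberately simple and the patterns and placeholders are illustrative; real deployments combine pattern matching with classification-based DLP.

```python
# Hypothetical sketch: scrub obvious sensitive patterns from text before
# it reaches a model or its memory. Patterns here are illustrative only.
import re

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"), "[API_KEY]"),  # key-like token
]

def redact_prompt(text):
    """Replace each matched sensitive pattern with a placeholder."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

redact_prompt("Contact jane@corp.com, SSN 123-45-6789")
# -> "Contact [EMAIL], SSN [SSN]"
```

Redacting at the application boundary, rather than trusting the model to ignore sensitive content, is the point: once data enters a prompt or model memory, the survey's respondents are saying it is effectively out of their control.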

One finding from the data deserves particular attention: shadow AI leads overall data damage risk at 31%, but concerns about sanctioned AI leakage, including poor filtering, prompt injection, and over-broad retrieval from internal knowledge bases, total 41% combined. Taken together, the tools organizations approved and deployed account for more reported risk than ungoverned ones, so the gap between unsanctioned and sanctioned risk is far narrower than the headline number suggests.

Chart: Biggest Challenges

The Takeaway

The State of Security data draws a clear line between where organizations have committed, specifically spending, evaluation, and deployment, and where infrastructure has not followed, specifically controls, governance, and data protection. That gap is not a temporary growing pain. It is the defining risk of this moment.

Security leaders who close it fastest will not be those who slow down on deployment. They will be those who build governance and identity frameworks now, before scale makes the problem far harder to reverse.

The study captures how this is playing out across more than 500 security-specific technology leaders, 80% of whom hold C-Suite or Director-level roles. The findings summary is free to download and covers the full picture across budget shifts, vendor strategy, agentic AI adoption, and data security. For teams that want to go deeper, ETR offers private briefings to walk through the full dataset and what the shifts mean for your 2026 security roadmap.

Download the State of Security Findings Summary or book a session with an ETR team member to go beyond the summary.
