In 2026, LLM and generative AI (GenAI) protection surpassed cloud security as the top enterprise security budget priority for the first time. According to ETR's 2026 State of Security Study, 59% of security and technology leaders plan to increase GenAI security spending, compared with 54% for cloud security and 45% for detection and response. Everything else falls below 40%.
Cloud security remains one of the most critical categories in enterprise security budgets. But 2026 cloud security spending trends tell a clear story: GenAI security priorities are now driving incremental budget in ways that traditional categories are not.
The State of Security is ETR's annual study tracking budget priorities, spending intentions, and technology adoption across the enterprise security market. The 2026 edition, fielded in March across 517 respondents, covers category-level budget intent, hyperscaler strategy, and AI governance. The findings point to a market in mid-shift.
That shift signals more than a new line item. It reflects a broader change in how enterprises define security risk as AI adoption accelerates. During the State of Security feedback panel, one security leader described AI as bringing down the cost of entry for attackers, making sophisticated attacks cheaper, faster, and more accessible. Another described the defensive response more directly: organizations now have to fight AI with AI, responding in minutes rather than hours or days. Together, those observations help explain why AI security has moved from emerging concern to budget priority so quickly.
- LLM and GenAI protection is now the top security budget priority for 2026 at 59%, overtaking cloud security (54%) for the first time
- Cloud security still ranks second, but its budget growth intent has declined year over year, from 58% in 2025 to 54% in 2026
- Zero trust is the second-fastest rising category, climbing from 35% to 39%
- Security features are beginning to influence hyperscaler AI workload decisions, but the market is split: 31% say it will not affect their approach at all
- Visibility, identity, and agent governance capabilities rank higher than data protection when evaluating hyperscalers for AI and agentic workloads
- Native DSPM ranks last (23%) among all evaluated hyperscaler capabilities for AI workloads
- Cultural adaptation, not technology gaps, may be the biggest long-term risk in enterprise AI security
On a longitudinal basis, LLM and GenAI protection rose from 50% in 2025 to 59% in 2026, overtaking cloud security, which declined from 58% to 54%. Zero trust tools were the second-fastest riser, climbing from 35% to 39%. Detection and response slipped slightly, from 47% to 45%.
The pattern is hard to miss. Security leaders are not walking away from cloud security, but they are increasingly carving out budget for AI-specific risks that feel more immediate, less mature, and harder to manage with traditional approaches.
At the bottom of the rankings, risk-based exposure management and cyber liability insurance are the most static categories, with 66% and 63% of respondents expecting no change. Cyber liability insurance is the only category with meaningful decrease intent at 4%, suggesting some organizations are finally finding relief after years of rate pressure.
Security features are beginning to influence where organizations place AI workloads, though the picture has not yet settled into a clear standard.
Among buyers who say security features will influence their approach, the largest group (27% of all respondents) plans to stay multi-cloud while shifting incremental AI workloads toward providers with stronger native controls. Another 12% would consolidate with whichever hyperscaler offers the strongest built-in protections. An additional 11% will prioritize portability and third-party security, treating native controls as secondary rather than decisive.
At the same time, 31% say security features will not materially affect workload placement at all, and 19% still have no defined position.
That fragmentation matters. It suggests the market has not yet settled on a standard model for secure AI deployment in the cloud. Some enterprises are leaning into native hyperscaler controls. Others prefer portability, separation, and independent layers of security. And many are still deciding what secure AI workload placement should look like. In practice, cloud strategy is becoming more entangled with AI governance, identity, and control frameworks than with infrastructure security alone.
When respondents were asked which hyperscaler capabilities matter most for AI and agentic workloads, the results divided cleanly into two tiers.
The top four capabilities cluster within nine percentage points of one another: compliance and audit reporting (48%), strong identity integration for non-human identities (44%), unified policy and enforcement (40%), and built-in agent controls (39%). All four concern visibility, identity, and governance over agent behavior.
The bottom three sit 10 or more points lower: usage controls or kill switches for AI agents (29%), model and data isolation (28%), and native data security posture management (DSPM) with sensitive data controls (23%). DSPM ranks last, both in this question and in the broader data security findings.
Panelists at the State of Security feedback panel described governance as the practical issue that rises once organizations move beyond experimentation. One described creating a dedicated AI governance process, separate from standard risk assessment, to determine what data an AI system can access, what permissions it needs, and what guardrails must be in place before deployment.
But the data layer beneath those agents remains a different story. One panelist identified the root of the problem directly: "Without being able to classify your data, you don't even know how to enforce your controls. You don't know how to prioritize what is important, what is not. Basically, this is something which has to be present in order to be able to achieve data security in AI."
That observation lines up with what the study shows. Organizations are building out agent governance infrastructure before data protection infrastructure. The capability that would prevent sensitive data from being exposed in the first place ranks last across the entire category.
Near the end of the panel discussion, participants were asked whether the biggest long-term failure point would come from technology gaps, governance gaps, or cultural gaps. The strongest answer was culture.
The argument: technology problems will eventually be addressed by better tools, stronger controls, and more mature governance. The harder issue is the human response to AI, including fear, unclear ownership, and the tension between excitement and apprehension in organizations being asked to adopt systems no one fully understands yet.
That perspective adds an important layer to the study's findings. AI security is not only a technical transition. It is an organizational one, and the enterprises that treat it as such will be better positioned to close the gaps the data is already surfacing.
For vendors, hyperscalers, and enterprise security teams, the 2026 data points in one direction. Winning in cloud security now requires more than protecting infrastructure. It increasingly means helping customers govern AI behavior, integrate identity across human and non-human actors, enforce policy consistently, and prove compliance in environments where autonomous systems are starting to take action. The cloud security conversation is no longer just about workloads and posture. It is about control in a world where workloads are increasingly intelligent.
Cloud security is still essential. But as the 2026 data makes clear, it is no longer the clearest proxy for where security budgets are heading next. That distinction now belongs to LLM and GenAI protection. The enterprises that adapt fastest will not be the ones that simply spend more. They will be the ones that build the governance, identity, and operational discipline to secure AI as it moves from pilot to production.
Want to see where every security category lands in the full data set? Start with the 2026 State of Security Findings Summary for a concise look at the data, then book a session with an ETR team member to explore the implications in more detail.
Data sourced from ETR's 2026 State of Security Study. N=517 respondents (N=260 for hyperscaler capabilities question). Survey conducted March 2026. Panelist quotes from ETR Insights 468: State of Security Feedback Panel.