What RSAC 2026 Made Clear: AI Risk Is No Longer Hypothetical

ETR attended the RSA Conference (RSAC) 2026 in San Francisco, sitting in on sessions and discussions with practitioners and industry leaders, working the expo floor, and connecting with technology leaders and vendors across the cybersecurity landscape. One thread became apparent across the event this year: the large and growing gap between what safe implementation of AI systems (both agentic and generative) requires and the reality inside enterprises. Below are five themes we observed at the conference, supported by findings from our latest State of Security report, which polled 517 security-focused technology leaders on a broad and deep range of IT security postures, practices, and intentions.

 

Key Takeaways

• Agentic AI is in production but lacks needed security controls. In our 2026 annual State of Security study, 37% of organizations have AI agents deployed or in active testing, up from 27% last year, versus only 20% with none at all.

• Non-human identities (NHIs) re-emerge as a defining challenge. Agent deployment is swelling NHI counts to unprecedented levels, and identity security ranks as the top overall priority in the State of Security survey, where NHI management evaluation rates rose to 70%, reflecting the urgency of governing agent credentials at scale.

• AI security spending crosses a tipping point. LLM and GenAI protection dethroned cloud security as the top budget growth area for the first time, climbing from 50% to 59% year over year, and 54% of organizations are already spending on AI security tools or plan to within six months.

• Zero trust practices deepen while legacy tool confidence erodes. Mid-tier zero trust implementation gained seven points year over year (y/y), but confidence in established controls is declining broadly, with seven of the 10 named security strategies in the State of Security losing ground, and the industry has yet to settle on a single agentic governance model.

• Adversaries are trying to weaponize AI faster than defenders adapt. Shadow AI is cited as the top data security risk at 31%, but the sanctioned AI-usage risk options total a combined 41%. On the defensive side, preventing sensitive information from entering prompts is the hardest data security problem by a wide margin.

 

Agentic AI Rewrites the Threat Model

The most notable shift at this year's RSAC was arguably the framing of AI agents as a fundamentally different security challenge, in both scale and complexity, from prior IT tools or human users. Where LLMs generate text in response to prompts, agentic systems interpret goals, chain tools together, and take autonomous actions on live enterprise data. They may approve purchase orders, execute financial transactions, modify production code, and, on the security and SOC side, grant system access. Industry and market leaders drove home that a single vulnerability in an agentic workflow does not merely expose data; it can trigger actions across a much larger attack surface.

The State of Security report shows that this shift is already underway in the field: 37% of organizations now have AI agents deployed or in active testing, up 10 points from 27% in 2025, while those considering agents but without a timeline fell from 34% to 27%. The conviction behind this is also clear, as 68% of security leaders rate AI agents a 4 or 5 out of 5 for importance to cybersecurity's future, up from 62% last year, while less than 1% rate them a 1. However, the security apparatus has not kept pace: only 3% of organizations report broad deployment of agent-specific security controls versus 20% claiming no controls at all, while a slight majority (53%) remain in a pilot phase. This gap between agent deployment velocity and security control maturity was reflected across several questions in ETR's 2026 State of Security study.

 

Non-Human Identity Is the New Perimeter?

Over and over, security experts argued that the expanded challenges NHIs pose in an agentic world carry exponentially greater potential for harm. By most estimates, agents in production or active use are multiplying faster than any security team at a large enterprise can inventory them. Presenters drew a sharp parallel to the early days of cloud adoption, when machine identity explosion, secrets sprawl, shadow IT, and governance gaps caught enterprises off guard; but while cloud identity debt accumulated over years, AI is compressing that same arc into months. The recurring call to action was clear: inventory agent identities, define lifecycle controls for provisioning and revocation, and treat AI governance as a continuous program rather than a one-time project.

The State of Security report backs up this urgency. Identity security scored the highest of any priority area by a meaningful margin, while NHI management evaluation rates climbed to 70%. Additionally, when asked about the identity-related risks of agentic AI, 57% of respondents flagged agents acting outside their intended context and 56% flagged agents being over-privileged, the two leading concerns. Notably, approval workflows for high-impact actions, one of the few controls that can stop unauthorized actions before they happen, ranked last at just 25%.

 

SOS Trending Areas: Security for generative AI and LLM security lead as the top areas of planned or completed evaluation for 2026, followed by cloud security, which saw no change on a year-over-year basis, while data security citations grew by eight percentage points y/y.

 

The AI Supply Chain Is Wide Open

Supply chain risk, though far from new to cybersecurity professionals, also took on a particular urgency at this year's conference. Moving well beyond traditional software dependencies and potential zero-day threats, AI usage greatly expands the risk to the enterprise stack. This includes opaque training data with hidden ownership or unknown provenance, third-party models with weak or outdated safety checks, vulnerable agent orchestration frameworks, and artifacts that attackers can study and reuse more effectively. One session included a live demonstration of a malicious MCP server package harvesting email addresses, illustrating how the speed and implicit trust of the AI toolchain create exploitable gaps in real time. The AI supply chain is fast-moving and largely ungoverned, creating real exposure for enterprises deploying agents at scale.

It is hardly a surprise that this year's State of Security report reflects a shift in priorities for security budgets, with LLM and GenAI protection dethroning cloud security as the number-one growth area for the first time. AI protection rose from 50% to 59% y/y, while cloud security fell from 58% to 54%. Security for generative AI now carries the highest planned evaluation rate of any trending category at 83%, up from 75% in 2025, and LLM security is the second-fastest riser, climbing from 61% to 76% over two years. On the spending side, 54% of organizations are already spending on AI security tools or plan to within six months, up 10 points from 43% last year, while the proportion with no plans fell from 23% in 2025 to just 16% this year. That said, spending remains immature, with 39% of respondents reporting that AI accounts for less than 10% of their overall security stack.

 

Zero Trust Gets an AI Makeover

Zero trust was a dominant framework at RSAC this year, but the conversation has evolved. Multiple sessions introduced agentic trust frameworks that adapt zero trust principles specifically for AI workloads. The core questions now sound different: Who is this agent? What is it doing? What data is it consuming and serving? Where can it go? What happens if it goes rogue? Practitioners presented maturity models that progress from ad hoc, to guardrailed, to fully zero-trust agentic operation, incorporating cryptographic identity, sandboxed execution, and continuous red teaming.

The State of Security report shows deepening zero trust adoption, with the share of organizations with no zero trust tools dropping from 12% to 8%, and the mid-tier implementation buckets (26-50% and 51-75%) gaining seven points combined year over year, while the top of the distribution barely moved. Zero trust tools were also the second-fastest riser in budget intent, climbing from 35% to 39%. But confidence in established security controls is eroding. Of 10 named security strategies, seven declined year over year. Employee training fell the most, dropping 10 points to 62%, while IAM, PAM, DLP, and role-based access controls all shed points simultaneously. AI-based behavioral anomaly detection, the only AI-native control on the list, was the sole meaningful gainer, rising from 20% to 25%. Meanwhile, the industry has yet to settle on a governance model for agents: the three leading approaches (centralized control plane at 26%, case-by-case risk tiering at 25%, and identity-centric governance at 23%) sit within three points of each other.

 

SOS Biggest Concerns: Acting outside of intended context or policy edges out over-privileging as the biggest identity-related risk concern in the State of Security study, at 57% and 56%, respectively, well ahead of privilege escalation, lack of non-repudiation, and token theft.

 

Adversaries Are Moving at Machine Speed

The offensive side of the AI equation received significant attention. Adversaries are adopting AI across multiple vectors, including social engineering, AI-assisted malware development (which accelerates code generation while lowering the barrier to entry), and agentic AI-orchestrated attacks capable of reconnaissance, exploitation, and machine-speed lateral movement. As one industry presenter framed it, the risk surface expands in lockstep with AI adoption: rapid enterprise uptake, embedded AI in business workflows, model proliferation, and autonomous agents all contribute. Compounding the problem, defenders face lagging oversight, limited visibility, and hidden data flows. GenAI adoption is outpacing even cloud adoption in the 2010s by a wide margin, and the security implications are more dire.

Shifting again to the State of Security report, when assessing the greatest threats to data security, 31% of respondents pointed to shadow AI usage outside sanctioned tools, the highest single response (by nine percentage points). However, sanctioned AI leakage concerns, including poor model output filtering (18%), prompt injection (13%), and over-broad retrieval from internal knowledge bases (10%), combined for 41% of total answers. On the defensive side, preventing sensitive data from entering AI prompts ranks as the hardest data security problem by a factor of two, cited by 36% of practitioners in this year's survey. Data discovery and classification, by contrast, sat last among responses at 9%.

 

SOS Biggest Concerns: Shadow AI usage outside of sanctioned tools leads all listed risk origins. In general, unsanctioned AI outranks sanctioned AI usage among risk concerns, 53% to 41%.

 

The Bottom Line

Across the trends and challenges on display last week, AI has clearly evolved from a speculative concern to a live operational risk. Both the State of Security results and RSAC reinforced that securing agentic systems, governing non-human identities, and hardening the AI supply chain are not hypothetical or future problems, but rather urgent and ongoing enterprise challenges. ETR will continue tracking these security practices and vendor spending dynamics through surveys and our ETR Community.

 
