The accountability question sounds simple: when an AI agent executes a high-stakes action, can you prove a specific human authorized it? Albert Biketi, Chief Product and Technology Officer at Yubico, posed it at RSAC 2026 last week. "The hard problem in agentic AI security is accountability: can you prove a specific human approved a high-consequence action?"
For most enterprise security teams, the honest answer is no.
That's what RSAC 2026 kept circling back to. Not theoretical AI threat modeling or five-year risk horizons. The specific, structural problem that AI agents create in identity infrastructure built entirely around humans. The conference ran March 23-26 at Moscone Center in San Francisco, with agentic AI, identity, and Model Context Protocol explicitly on its official list of top trends for 2026. That combination is not coincidental.
The gap between claimed visibility and actual control
Start with the baseline. An ArmorCode and Purple Book Community survey of more than 650 security leaders found that 90% of enterprises claim visibility into their AI footprint. That sounds reassuring until the next number: 59% of those same organizations confirmed or suspected they have shadow AI. Nearly three in five organizations that said they could see everything also admitted they are not seeing everything.
Nudge Security's data adds texture. Their AI Agent Discovery product, announced at RSAC, found that 80% of organizations are encountering agentic AI risks related to improper data exposure and unauthorized system access. The product covers Microsoft Copilot Studio, Salesforce Agentforce, and n8n, and specifically surfaces unauthenticated MCP connections, orphaned agents, and risky integrations.
"The greatest AI security threat isn't what organizations can't see, it's what they can see but can't govern fast enough to stop," said Sangram Dash, CISO and VP of IT at Sisense.
That's a more precise framing than most vendor messaging at the conference. The problem is not purely discovery. The governance layer does not exist yet.
What the industry shipped
The vendor response at RSAC was substantial, though uneven in maturity.
On identity: RSA expanded passwordless capabilities for Microsoft 365 E7 to cover both human and AI agent identities. IBM, Auth0, and Yubico announced a Human-in-the-Loop authorization framework combining IBM WatsonX orchestration, Auth0's CIBA-standard identity flows, and Yubico's YubiKey hardware authentication for cryptographically verified human approval of high-stakes agent actions. Yubico and Delinea separately announced integration of hardware-attested Role Delegation Tokens with Delinea Platform and StrongDM ID, an identity layer built specifically for AI agents that enters early access in Q2 2026.
Saviynt debuted what they're calling "Identity Security for AI." CEO Sachin Nayyar argued the full stack is required: "core identity management, posture management, privileged access management, vaulting, enforcement — everything running together at AI speed."
On detection and response: CrowdStrike announced general availability of Falcon AI Detection and Response (AIDR), plus AI agent discovery and shadow AI governance capabilities. CEO George Kurtz framed the endpoint as the key observation point: "The endpoint is really the manifestation of where AI takes place." Check Point unveiled an AI Defense Plane for governance, visibility, and runtime protection. Palo Alto Networks announced Prisma AIRS 3.0 with AI agent posture capabilities and previewed an "AI Agent Gateway." Google announced MCP remote server support for Google Security Operations, with general availability planned for early April 2026, and completed its acquisition of Wiz at the conference.
The attacker picture
The defensive response looks more urgent when you look at what adversaries are actually doing. M-Trends 2026 — Mandiant and Google's annual threat report based on over 500,000 hours of incident investigations — found that the window from initial access to attacker handoff has collapsed to 22 seconds. That is the defender's intervention window before adversaries establish persistence. In previous years it was measured in hours.
Mandiant's AI Risk and Resilience report shows adversaries have moved from experimenting with AI to deploying adaptive tools and autonomous agents capable of rewriting their own code in real time. The question is whether enterprise defenses are being built at anywhere near comparable speed.
Zeus Kerravala, founder and principal analyst at ZK Research, put the structural problem plainly: "It's unlike anything the security industry's ever had to deal with before. How you manage identities and how you onboard access and how you delegate trust and governance, all that's going to change. Our attack surface has gone from something that was unmanageable to begin with to completely chaotic."
Why the identity framing matters
Most enterprise security architecture assumes a human at the authentication point. Certificates, passwords, hardware tokens, biometrics — all designed to bind an action to a person. When an AI agent acts autonomously across multiple systems, that binding breaks. The agent may be operating under a service account with permissions far broader than the specific task requires, with no audit trail that maps back to which human decision prompted which action.
The IBM-Auth0-Yubico framework attempts to solve this at the protocol level. CIBA (Client-Initiated Backchannel Authentication) lets a system trigger an out-of-band authentication request to a human before a high-consequence operation proceeds, with hardware attestation on the human side. Whether that added friction is workable in high-throughput agentic workflows is genuinely unclear. The Q2 2026 early-access timeline for the Yubico-Delinea RDT integration puts production-ready tooling at least several months out.
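The shape of that flow is worth making concrete. The sketch below is a minimal, self-contained illustration of the CIBA pattern — an agent requests approval, a human approves out of band, and the high-consequence action only runs once approval lands. The class names, method names, and endpoints are hypothetical stand-ins, not the IBM-Auth0-Yubico API; a real deployment would call an OpenID CIBA-compliant authorization server over HTTPS and verify hardware attestation.

```python
import time
import uuid

class MockCibaServer:
    """In-memory stand-in for a CIBA authorization server (illustrative only)."""
    def __init__(self):
        self._pending = {}

    def backchannel_authenticate(self, user_hint, binding_message):
        # Push an out-of-band approval request to the named human's device.
        auth_req_id = str(uuid.uuid4())
        self._pending[auth_req_id] = {"user": user_hint,
                                      "message": binding_message,
                                      "status": "pending"}
        return auth_req_id

    def approve(self, auth_req_id):
        # In a real flow, this fires when the human taps a hardware key.
        self._pending[auth_req_id]["status"] = "approved"

    def poll(self, auth_req_id):
        return self._pending[auth_req_id]["status"]


def gated_action(server, user, action, perform):
    """Run `perform` only after `user` approves the action via the backchannel."""
    req_id = server.backchannel_authenticate(user, f"Approve: {action}")
    server.approve(req_id)  # mock: auto-approve so the example is runnable
    for _ in range(50):     # real agents poll the token endpoint with backoff
        if server.poll(req_id) == "approved":
            return perform()
        time.sleep(0.1)
    raise PermissionError(f"{user} did not approve '{action}' in time")


result = gated_action(MockCibaServer(), "alice@example.com",
                      "wire transfer over $10k", lambda: "executed")
print(result)
```

The design point is the audit trail: every execution maps back to a named approval request, which is exactly the binding Biketi's accountability question asks for.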
WEF's Global Cybersecurity Outlook 2026 found 87% of organizations see rising risks from AI vulnerabilities. Omdia research shows 89% of CISOs want to accelerate adoption of agentic security tools. Those numbers reflect demand. The question is whether the tooling is mature enough to deliver actual governance rather than governance theater.
Greg Nelson, CEO of RSA, framed the core challenge: "The rise of AI agents in the enterprise means organizations need to rethink how they secure every identity — human and machine alike."
What practitioners need to track now
For teams deploying agents: the MCP surface is the most immediate risk. Unauthenticated MCP connections are the specific vector Nudge Security flagged, and Google's announcement of MCP server support in Google Security Operations (GA in early April) suggests the tooling ecosystem is catching up — but it is not there yet. Audit your MCP connection inventory before expanding agent deployments.
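What an MCP inventory audit looks for can be sketched simply. The triage logic below flags the three conditions Nudge Security called out: unauthenticated connections, orphaned agents with no accountable owner, and risky transport. The inventory format and field names here are invented for illustration; a real audit would work from your discovery tool's export.

```python
import json

# Hypothetical MCP connection inventory (field names are assumptions).
inventory = json.loads("""
[
  {"name": "crm-agent",  "endpoint": "https://mcp.internal/crm",  "auth": "oauth2", "owner": "sales-eng"},
  {"name": "build-bot",  "endpoint": "http://mcp.internal/ci",    "auth": "none",   "owner": "platform"},
  {"name": "old-helper", "endpoint": "https://mcp.internal/help", "auth": "token",  "owner": null}
]
""")

def triage(conns):
    """Return (connection name, issue) pairs for the risk conditions above."""
    findings = []
    for c in conns:
        if c["auth"] == "none":
            findings.append((c["name"], "unauthenticated MCP connection"))
        if c["owner"] is None:
            findings.append((c["name"], "orphaned agent: no accountable owner"))
        if c["endpoint"].startswith("http://"):
            findings.append((c["name"], "cleartext transport"))
    return findings

for name, issue in triage(inventory):
    print(f"{name}: {issue}")
```

Running this against even a small inventory tends to surface the pattern the survey data describes: connections nobody owns and endpoints nobody authenticated.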
For security teams: service accounts are not a viable identity primitive for AI agents. Role Delegation Tokens with hardware attestation or CIBA-based workflows for human approval are closer to what is needed. Both are in early access, not production-hardened. Plan for a gap.
The ArmorCode and Purple Book Community survey found that a majority of respondents said AI-assisted development now outpaces security team review capacity, which means code carrying unreviewed agent permissions is already in production at significant scale. Organizations that confirmed or suspected AI-generated code vulnerabilities in production are running a live experiment in this gap, whether they chose to or not.
The rethink Greg Nelson described is underway. It is also running roughly 18 months behind the deployment curve.
Kai Nakamura covers AI for The Daily Vibe.



