The AI agent executed the action. The logs say it was authorized. The question is whether that authorization can be cryptographically tied to a specific human who made a deliberate decision in that moment -- or whether 'authorized' means a service account token sitting in environment variables since 2023.
Albert Biketi, Chief Product and Technology Officer at Yubico, framed it precisely at RSAC 2026: "The hard problem in agentic AI security is accountability: can you prove a specific human approved a high-consequence action?"
That sentence is doing a lot of work. It is not asking whether AI agents can be authenticated. They can -- with service accounts, API keys, and machine identities. The harder problem is proving human intent at the moment of a specific action. Those are different things, and existing IAM infrastructure mostly handles the first one.
Why existing identity systems weren't built for this
Traditional IAM was designed around sessions. A human logs in, gets a token, uses it. The assumption is that a valid token means the human is present. That assumption started breaking with service accounts. It breaks completely with autonomous agents.
When a human delegates work to an AI agent, the agent may make dozens of consequential decisions over hours or days -- each one without real-time human oversight. By the time something goes wrong (a misconfigured permission scope, an unexpected API call, a file write that shouldn't have happened), the trail leads back to a token, not a decision.
Privileged access management has the same structural gap. PAM was built to control which credentials can access which systems. It was not designed to answer: did the person who owns these credentials consciously authorize this specific action, right now, with full context?
This is the distinction that RSAC 2026 surfaced as the central identity problem of the agentic era.
The Yubico/IBM/Auth0 framework: hardware attestation at the approval layer
At RSAC, IBM, Auth0, and Yubico announced a 'Human-in-the-Loop' authorization framework that attacks this problem at the cryptographic layer.
The architecture combines IBM WatsonX for agent orchestration, Auth0's CIBA (Client-Initiated Backchannel Authentication) standard for identity flows, and Yubico's YubiKey hardware for the actual approval gesture. When the agent needs to take a high-stakes action, it triggers a CIBA flow that pushes an approval request to the human. The human responds with a physical hardware key tap. That approval is cryptographically signed and logged.
CIBA is an OpenID Connect extension built for decoupled authentication -- cases where authentication happens on a different device or channel than the one requesting access. It was originally designed for scenarios like approving a bank transaction on your phone while the request originates from a desktop browser. The mapping to agent authorization is reasonable: the orchestrator requests approval; the human is on a separate channel entirely.
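The decoupled shape of that flow can be sketched in miniature. This is an illustrative in-memory simulation, not Auth0's actual API: real CIBA runs over HTTPS against an OpenID provider's backchannel endpoint, and every class, field, and method name below is an assumption made for the sketch.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class ApprovalRequest:
    auth_req_id: str
    action: str                # the high-consequence action awaiting approval
    user: str                  # the human the request is pushed to
    status: str = "pending"    # pending -> approved / denied / expired
    expires_at: float = field(default_factory=lambda: time.time() + 300)

class BackchannelAuthorizer:
    """Minimal stand-in for a CIBA backchannel endpoint plus polling."""

    def __init__(self):
        self._requests: dict = {}

    def initiate(self, user: str, action: str) -> str:
        # The agent (the client) initiates; the approval request is pushed
        # to the human on a separate channel (phone, security-key prompt).
        req = ApprovalRequest(auth_req_id=secrets.token_urlsafe(16),
                              action=action, user=user)
        self._requests[req.auth_req_id] = req
        return req.auth_req_id

    def respond(self, auth_req_id: str, approved: bool) -> None:
        # The human responds out-of-band -- in the RSAC demo, a YubiKey tap.
        self._requests[auth_req_id].status = "approved" if approved else "denied"

    def poll(self, auth_req_id: str) -> str:
        # The agent polls until a decision arrives (CIBA "poll" mode).
        req = self._requests[auth_req_id]
        if time.time() > req.expires_at and req.status == "pending":
            req.status = "expired"
        return req.status

authz = BackchannelAuthorizer()
req_id = authz.initiate(user="alice", action="rotate-prod-db-credentials")
assert authz.poll(req_id) == "pending"   # agent blocks; no decision yet
authz.respond(req_id, approved=True)     # the human taps approve
assert authz.poll(req_id) == "approved"
```

The key property the sketch preserves is that the agent can never mint its own approval: it can only initiate and poll, while the decision arrives on a channel it does not control.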
The YubiKey adds hardware attestation on top. The signature includes proof that the approval came from a specific registered physical device, not a software token that could be compromised or replayed. That is the difference between 'someone clicked approve in a browser' and 'this specific hardware device was physically present and activated by a human hand.'
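What that signed, logged approval buys you is a tamper-evident audit record. The sketch below illustrates the idea only: a YubiKey would sign with an asymmetric device-resident key (FIDO2-style attestation), so the HMAC here is purely a runnable stand-in for that signature, and the record fields are assumptions, not the framework's actual schema.

```python
import hashlib
import hmac
import json
import time

# Stand-in for the registered device's key material. In hardware attestation
# this would be a private key that never leaves the device.
DEVICE_KEY = b"per-device-secret"

def sign_approval(user: str, device_id: str, action: str) -> dict:
    record = {
        "user": user,
        "device_id": device_id,   # which registered physical device approved
        "action": action,
        "approved_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_approval(record: dict) -> bool:
    claimed = dict(record)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

approval = sign_approval("alice", "yubikey-0042", "rotate-prod-credentials")
assert verify_approval(approval)
approval["action"] = "delete-prod-database"  # tampering breaks the signature
assert not verify_approval(approval)
```

The final two assertions are the audit story in miniature: the record binds a named human, a specific device, and a specific action, and any after-the-fact edit to the log entry fails verification.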
The question is whether this holds in production at scale. CIBA flows add latency -- you are waiting for a human to physically respond before the agent can proceed. For high-stakes, low-frequency actions (financial approvals, privileged system changes), that friction is the point. For agents making hundreds of routine decisions per minute, it is a non-starter. Any real deployment will need a clear taxonomy of which actions require hardware attestation and which can proceed on scoped tokens alone.
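Such a taxonomy could be as simple as a policy function that sorts actions into tiers. Everything below is an assumption for illustration -- the announced framework does not publish tier names or an action list -- but it shows the shape of the decision any deployment would have to encode.

```python
from enum import Enum

class ApprovalTier(Enum):
    SCOPED_TOKEN = "proceed on the agent's scoped token"
    HARDWARE_ATTESTED = "block until a hardware-attested human approval"

# Hypothetical set of high-consequence actions; a real policy would be
# driven by the organization's own risk classification.
HIGH_CONSEQUENCE = {
    "modify_iam_policy",
    "approve_payment",
    "delete_data",
    "grant_permission",
}

def required_tier(action: str) -> ApprovalTier:
    # High-stakes, low-frequency actions can absorb the CIBA round-trip;
    # routine, high-frequency actions cannot wait on a human tap.
    if action in HIGH_CONSEQUENCE:
        return ApprovalTier.HARDWARE_ATTESTED
    return ApprovalTier.SCOPED_TOKEN

assert required_tier("approve_payment") is ApprovalTier.HARDWARE_ATTESTED
assert required_tier("read_ticket_queue") is ApprovalTier.SCOPED_TOKEN
```

The hard part in practice is not the function but the set: deciding, per organization, which actions belong in the high-consequence bucket.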
Role Delegation Tokens: identity built for agents, not inherited from humans
The second announcement addresses a different part of the same problem. Yubico and Delinea (which acquired StrongDM last year) announced integration of hardware-attested Role Delegation Tokens with the Delinea Platform and StrongDM ID, an identity layer designed specifically for AI agents. Early access begins in Q2 2026.
RDTs give agents scoped, time-bound identity that is explicitly not a repurposed human credential. Rather than an agent inheriting a service account (which accumulates permissions over time and has no natural expiry tied to a specific task), the agent receives a token scoped to: this role, this session, these specific permissions, delegated by this named human.
The hardware attestation ties the delegation itself to a physical device. The same human who taps a YubiKey to approve a privileged action can be the one who issued the delegation. That creates an auditable chain: human identity (hardware-attested) issued this role delegation, which authorized this agent, which performed this action.
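That chain implies a token whose claims carry the delegation explicitly. Yubico and Delinea have not published the RDT schema, so the field names below are guesses that simply follow the article's description -- this role, this session, these permissions, delegated by this named human -- with a natural expiry attached.

```python
import time
import uuid

def issue_rdt(role: str, permissions: set, delegated_by: str,
              ttl_seconds: int = 3600) -> dict:
    """Hypothetical Role Delegation Token claims; not the actual schema."""
    now = int(time.time())
    return {
        "sub": f"agent:{uuid.uuid4()}",  # the agent's own identity
        "role": role,                    # a scoped role, not a human credential
        "scope": sorted(permissions),    # explicit, enumerated permissions
        "delegated_by": delegated_by,    # the named human who issued it
        "iat": now,
        "exp": now + ttl_seconds,        # expiry tied to the task, not open-ended
    }

def is_valid(token: dict, needed_permission: str) -> bool:
    # The token fails closed: expired, or outside the delegated scope.
    return time.time() < token["exp"] and needed_permission in token["scope"]

rdt = issue_rdt("db-migrator", {"db:read", "db:migrate"},
                delegated_by="alice@example.com")
assert is_valid(rdt, "db:migrate")
assert not is_valid(rdt, "db:drop")  # never delegated, so never permitted
```

Contrast this with an inherited service account: there is no `delegated_by`, no task-bound `exp`, and the permission set grows by accretion rather than enumeration.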
This is architecturally cleaner than anything currently in production. Whether it maps to real enterprise deployment patterns -- heterogeneous agent frameworks, multi-cloud IAM, existing PAM investments -- is what the Q2 early access period will actually test.
The scale of the gap
The context around these announcements explains why the identity problem is getting this much focus. According to Nudge Security, 80% of organizations have encountered agentic AI risks related to improper data exposure and unauthorized system access. An ArmorCode/Purple Book Community survey of over 650 enterprise leaders found that 90% claim visibility into their AI footprint -- yet 59% of those same respondents confirmed or suspected 'shadow AI' operating in their environment. Those two claims cannot both be true, which says a lot about how much enterprises actually know about what their agents are doing.
Nudge Security also launched AI Agent Discovery at RSAC, covering Microsoft Copilot Studio, Salesforce Agentforce, and n8n. It surfaces unauthenticated MCP connections, orphaned agents, and risky integrations. The MCP angle is worth watching: as Model Context Protocol adoption grows, agents are calling external tools over connections that often carry no authentication at all. Discovery is the prerequisite step before accountability is even possible.
Saviynt launched what CEO Sachin Nayyar called their "most significant release for AI agents and LLMs" -- an Identity Security for AI platform covering the full stack. As Nayyar put it: "You need all the pieces of identity management -- core identity management, posture management, privileged access management, vaulting, enforcement -- everything running together at AI speed."
What this means for practitioners
The hardware-attested frameworks announced at RSAC are not something most security teams will ship this quarter. CIBA flows and RDTs require identity infrastructure investment and workflow redesign. They are best understood as proof of what the architecture can look like, not as drop-in solutions.
What is actionable now: treat every new agent deployment as an identity audit trigger. Map which service accounts your agents are inheriting. Verify those accounts are scoped to minimum necessary permissions and have expiry policies. Inventory unauthenticated MCP connections before someone else does it for you.
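That audit pass can start as something very small. The sketch below runs over a hypothetical inventory export; the field names and thresholds are illustrative assumptions, not drawn from any specific IAM product's API.

```python
def audit_agent_accounts(accounts: list) -> list:
    """Flag the three problems named above: over-broad scope, no expiry,
    and unauthenticated MCP connections. Field names are hypothetical."""
    findings = []
    for acct in accounts:
        if "*" in acct.get("permissions", []):
            findings.append((acct["name"], "wildcard permission scope"))
        if acct.get("expires") is None:
            findings.append((acct["name"], "no credential expiry policy"))
        if acct.get("mcp_connections_unauthenticated", 0) > 0:
            findings.append((acct["name"], "unauthenticated MCP connection"))
    return findings

inventory = [
    {"name": "agent-reporting", "permissions": ["read:dashboards"],
     "expires": "2026-09-01", "mcp_connections_unauthenticated": 0},
    {"name": "agent-ops", "permissions": ["*"],
     "expires": None, "mcp_connections_unauthenticated": 2},
]
flags = audit_agent_accounts(inventory)
assert ("agent-ops", "wildcard permission scope") in flags
assert ("agent-ops", "no credential expiry policy") in flags
assert not any(name == "agent-reporting" for name, _ in flags)
```

Even a crude pass like this turns "we think our agents are scoped" into a list of named accounts with named problems -- which is the input every framework above ultimately needs.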
The longer-term question is whether the pattern of agents inheriting human credentials ever made sense, or whether it was simply the path of least resistance when teams were moving fast. Biketi's framing is a useful audit test: can you prove a specific human approved each high-consequence action your agents are taking? If the answer is 'we have a log showing the agent used a shared service account,' that is a paper trail to an authentication artifact -- not accountability.
As Sangram Dash, CISO at Sisense, put it: "The greatest AI security threat isn't what organizations can't see, it's what they can see but can't govern fast enough to stop." The visibility half of the problem is being solved. The governance half is where the real engineering work begins.
Kai Nakamura covers AI for The Daily Vibe.