RSAC 2026 opened Monday in San Francisco. The biggest theme on the floor isn't a new threat actor or a zero-day. It's the AI agents enterprises deployed last year, now running across production systems with broad access, growing autonomy, and in many cases zero security oversight. If your organization runs AI agents in any capacity, you need to inventory them, audit their permissions, and start treating them as a distinct identity class this week.
Here's the situation as of March 24, day two of the conference.
What happened
RSAC 2026, running March 23-26 at Moscone Center, has drawn roughly 40,000 cybersecurity professionals. The conference theme is "The Power of Community," but the dominant conversation on the show floor is about nonhuman workers: the AI agents now embedded in enterprise SOCs, coding pipelines, productivity suites, and customer-facing platforms.
Three major announcements, the first landing the day before the show opened, made the scope of the problem concrete:
Microsoft detailed its agentic AI security strategy on March 22, announcing Agent 365, a control plane for managing AI agents across the enterprise. It reaches general availability May 1, bundled into the new Microsoft 365 E7 suite at $99 per user per month. The package includes new Defender, Entra, and Purview capabilities designed specifically for agent governance, identity protection, and data security. Microsoft also launched Entra Internet Access Shadow AI Detection (GA March 31) to identify unknown AI applications at the network layer, and a Security Dashboard for AI that gives CISOs a unified view of AI-related risk.
Straiker launched two products on March 23: Discover AI, which inventories AI agents, MCP servers, and tool connections across an organization, and an expanded Defend AI, which provides runtime protection for coding agents (Cursor, Claude Code, GitHub Copilot) and agent-builder platforms (AWS Bedrock AgentCore, Azure Foundry, Microsoft Copilot Studio). Straiker claims its detection models, trained on millions of real-world agent traces, catch agentic threats with sub-300ms latency and over 98% accuracy. Notably, Discover AI includes a database of over 12,000 MCP vulnerabilities.
SOCRadar announced its AI Agent Marketplace on March 23, a hub where organizations can deploy specialized autonomous AI agents for cybersecurity tasks within the SOCRadar XTI Platform, alongside new Identity Intelligence capabilities for combating identity-driven attacks.
The blast radius
According to Enterprise Technology Research survey data cited by SiliconAngle, at least 90% of organizations say they're using AI somewhere in their security stack. But 75% are applying AI to less than 10% of their security portfolio. That gap tells the story: AI agents are present but not governed. They exist in pockets, often deployed by individual teams, without centralized visibility.
The attack surface these agents create is specific and measurable.
Coding agents are the most exposed category. According to Straiker, 85% of developers now use AI coding tools. These agents ship code, and increasingly ship other agents, with minimal human review. The risk vectors include endpoint takeover, data exfiltration, remote code execution, and tool manipulation through malicious MCP servers. Enterprise productivity agents like Microsoft Copilot, ChatGPT Enterprise, and Salesforce Agentforce compound the problem by touching email, documents, CRM, and internal tools, often without security teams knowing which agents are active or what data they can reach.
The speed problem makes all of this worse. Unit 42 has tracked mean time to data exfiltration collapsing from nine days in 2021 to roughly 30 minutes by 2025. A February 2026 Malwarebytes report cited a 2025 MIT study where an AI model using the Model Context Protocol achieved full domain dominance on a corporate network in under an hour with no human intervention, evading endpoint detection by adapting tactics in real time.
And we already have a case study. In September 2025, Anthropic detected and disrupted what it documented as the first large-scale cyberattack executed without substantial human intervention. A Chinese state-sponsored group manipulated Anthropic's Claude Code into attempting infiltration of roughly 30 global targets, including financial institutions and government agencies. The AI performed 80 to 90 percent of the work. Human operators showed up only at a handful of decision points per attack cycle. The method wasn't a traditional exploit. The attackers jailbroke the model by breaking requests into small, innocent-seeming subtasks.
What to do right now
Walt Powell, lead field CISO at CDW, put it plainly in a pre-conference interview with BizTech Magazine: "Last year, my big takeaway was agents for everything. I think it's going to just double down on that: agents for every part of your security program. The flip side is, how do you secure an agent? That's what I'm really looking for this year, solutions for nonhuman identities, especially around agents."
If you're running a security operation of any size, here's the immediate priority list:
- Inventory your agents. Most organizations lack a basic count of what agents exist, what they access, and which MCP connections put them at risk. Products like Straiker's Discover AI and Microsoft's Shadow AI Detection exist specifically for this. If you can't buy a tool this week, start a spreadsheet; you need to know what's running (see the first sketch after this list).
- Treat agents as identities. Traditional IAM models were not built for nonhuman identities operating at scale. Agents need their own identity class with scoped permissions, access logging, and governance policies (one pattern is sketched below). Microsoft's Entra updates and Entro Security's new AI agent governance platform (also announced at RSAC) are both targeting this gap.
- Audit your MCP connections. The Model Context Protocol is becoming the standard integration layer for AI agents, and it's also becoming a supply chain attack vector. Straiker's database of 12,000+ MCP vulnerabilities suggests the exposure is already broad. Review what MCP servers your agents connect to and what permissions those connections grant (a starting-point script follows this list).
- Put human-in-the-loop gates on high-impact actions. Full autonomy sounds good in a demo. In production, agents that can isolate hosts, execute code, or access sensitive data need approval checkpoints (sketched below). Start with your highest-privilege agents and work down.
- Measure what matters. The SiliconAngle analysis notes that AI SOC adoption will succeed or fail based on telemetry quality, identity controls, and recoverability. Don't measure how many agents you deployed. Measure time-to-detect, time-to-respond, and whether your agents actually reduced operational friction or just added another tool nobody can govern (see the final sketch below).
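For teams starting from zero, here is roughly what the spreadsheet-level inventory in the first item looks like as code. This is a minimal sketch; every field name is an illustrative assumption, not any vendor's schema.

```python
# Minimal agent-inventory sketch. Field names are illustrative
# assumptions, not a vendor schema.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AgentRecord:
    name: str         # e.g. "copilot-sre-triage" (hypothetical)
    owner_team: str   # who deployed it
    platform: str     # e.g. "Copilot Studio", "Bedrock AgentCore"
    data_access: str  # systems and data the agent can reach
    mcp_servers: str  # MCP connections, semicolon-separated
    approver: str     # human accountable for its permissions

records = [
    AgentRecord("copilot-sre-triage", "SRE", "Copilot Studio",
                "tickets;runbooks", "jira-mcp", "j.doe"),
]

# Write the inventory out as a CSV any team can review.
with open("agent_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AgentRecord)])
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```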
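The second item, agents as a distinct identity class, comes down to two properties: an explicit permission scope and a log of every access decision. A minimal sketch of that pattern follows, with the class and scope names invented for illustration; this is not Entra's or any product's API.

```python
# Sketch: an agent identity with an explicit scope allow-list and
# audit logging on every authorization decision.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-iam")

class AgentIdentity:
    def __init__(self, agent_id: str, scopes: set[str]):
        self.agent_id = agent_id
        self.scopes = scopes  # explicit allow-list; nothing inherited

    def authorize(self, action: str) -> bool:
        allowed = action in self.scopes
        # Every decision is logged, allowed or not, for later audit.
        log.info("%s agent=%s action=%s allowed=%s",
                 datetime.now(timezone.utc).isoformat(),
                 self.agent_id, action, allowed)
        return allowed

triage_bot = AgentIdentity("soc-triage-01", {"tickets:read", "alerts:read"})
triage_bot.authorize("tickets:read")   # True, and logged
triage_bot.authorize("hosts:isolate")  # False: out of scope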
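For the MCP audit, a practical first pass is simply enumerating which MCP servers your local coding agents are configured to launch. The sketch below checks two common config locations, Claude Desktop's claude_desktop_config.json on macOS and Cursor's .cursor/mcp.json; treat the paths as examples and confirm where your own tools store MCP configuration.

```python
# Enumerate MCP servers declared in local agent config files.
# Paths below are common defaults, not an exhaustive list.
import json
from pathlib import Path

CANDIDATE_CONFIGS = [
    Path.home() / "Library/Application Support/Claude/claude_desktop_config.json",
    Path.home() / ".cursor/mcp.json",
]

for cfg in CANDIDATE_CONFIGS:
    if not cfg.exists():
        continue
    servers = json.loads(cfg.read_text()).get("mcpServers", {})
    for name, spec in servers.items():
        # "command"/"args" show what binary the agent will actually run;
        # remote servers declare a "url" instead.
        target = spec.get("command") or spec.get("url", "?")
        args = " ".join(spec.get("args", []))
        print(f"{cfg}: {name} -> {target} {args}".rstrip())
```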
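A human-in-the-loop gate can be as simple as a wrapper that refuses to run high-impact actions without explicit sign-off. In this sketch the approval is an interactive prompt, which is a stand-in assumption; in production it would be a ticket, a chat approval, or a policy engine.

```python
# Sketch: gate high-impact agent actions behind a human approval.
HIGH_IMPACT = {"isolate_host", "execute_code", "read_sensitive"}

def gated(action_name: str):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            if action_name in HIGH_IMPACT:
                # Stand-in approval; replace with your real workflow.
                answer = input(f"Approve agent action '{action_name}'? [y/N] ")
                if answer.strip().lower() != "y":
                    raise PermissionError(f"{action_name} denied by reviewer")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@gated("isolate_host")
def isolate_host(hostname: str):
    print(f"isolating {hostname}")  # placeholder for the real EDR call
```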
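Finally, on measuring what matters: time-to-detect and time-to-respond fall out of three timestamps per incident, nothing more exotic. A minimal sketch, with the record layout assumed for illustration:

```python
# Compute mean time-to-detect and time-to-respond from incident
# timestamps. The tuple layout is an assumption for illustration.
from datetime import datetime

incidents = [
    # (first malicious activity, detection, containment)
    (datetime(2026, 3, 20, 9, 0), datetime(2026, 3, 20, 9, 40), datetime(2026, 3, 20, 11, 5)),
    (datetime(2026, 3, 21, 14, 0), datetime(2026, 3, 21, 14, 25), datetime(2026, 3, 21, 15, 0)),
]

mttd = sum((det - start).total_seconds() for start, det, _ in incidents) / len(incidents)
mttr = sum((fix - det).total_seconds() for _, det, fix in incidents) / len(incidents)
print(f"MTTD: {mttd/60:.0f} min, MTTR: {mttr/60:.0f} min")
```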
What comes next
The SiliconAngle RSAC preview frames it well: the security industry has entered a phase where AI is present but not yet scaled, and the gap between deployment and governance is where the risk lives.
Byron V. Acohido, writing for Security Boulevard, describes what's happening as two simultaneous wars: using AI to rebuild defensive operations, and figuring out how to secure AI systems themselves as attackers learn to weaponize them. Both are accelerating, and neither is anywhere near resolved.
The practical question for the rest of this week at Moscone Center, and for the months after, isn't whether AI agents will become central to enterprise security. That's already happened. The question is whether security teams will govern them before attackers exploit the gap. The ETR data, 90% of organizations using AI but three-quarters applying it to less than 10% of their security portfolio, says most are not there yet.
The conference runs through March 26. Expect more vendor announcements addressing this exact surface. The ones worth watching will be specific about what they detect, how they integrate, and what they leave to humans.
Omar Rashid covers cybersecurity and technology for The Daily Vibe.