The enterprise wants AI agents in production. The enterprise has no idea who's in charge.
That gap — between deploying autonomous software at scale and actually being able to account for what it does — was the animating tension at RSAC 2026 in San Francisco this week. The conference floor heaved with vendors announcing frameworks, gateways, and governance layers for agentic AI. The survey data underneath those announcements told a starker story: 85 percent of major enterprises are experimenting with AI agents, but only 5 percent have moved them into production, according to a Cisco survey of its enterprise customer base. The other 80 percent, apparently, are stuck on something harder than the technology.
The numbers are moving fast in one direction. Globally, enterprises had 28.6 million active AI agents deployed in 2025 — a figure projected to surpass 2.2 billion by 2030, according to SC World reporting from RSAC 2026. But the infrastructure to secure, monitor, and audit those agents hasn't kept pace. A survey of more than 500 CISOs, conducted by API management firm Gravitee and published ahead of RSAC, found that 99.4 percent of organizations experienced at least one SaaS or AI ecosystem security incident in 2025 — with only three respondents reporting zero incidents. Nearly one-third experienced suspicious activity specifically involving AI agents. And 83.4 percent said their current tools cannot reliably distinguish between human and non-human behavior in their environments.
The problem has a name, even if the solution doesn't yet: non-human identity, or NHI — the catch-all term for API keys, OAuth tokens, service accounts, and now agent credentials that exist outside the human identity infrastructure that most security stacks were built around.
"We were not designed for autonomous agents, and it shows," said Hillary Baron, AVP Research at the Cloud Security Alliance, in a statement accompanying a separate CSA survey released at the conference. "Existing IAM approaches are showing strain as deployments scale." The CSA, which also announced a new nonprofit foundation dedicated to securing the agentic control plane at RSAC, found that 68 percent of organizations cannot reliably distinguish AI agent activity from human activity in their environments. Only 9 percent of organizations point to their IAM teams as the primary owner of AI agent identity.
The identity problem
At the heart of the security gap is a naming problem. AI agents don't have identities — not in the sense that IAM systems understand. They inherit human permissions, operate under shared service accounts, and expand their access dynamically as they execute tasks. The result: 89.2 percent of organizations claim strong OAuth governance, yet 27.4 percent were still breached through compromised OAuth tokens or API keys in 2025, per the Gravitee survey. Only about 20 percent of organizations manage AI agents under complete identity controls; most rely on reused API tokens and shared access across multiple agents, which makes tracking and accountability, as the survey puts it, effectively impossible.
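The difference between shared credentials and per-agent identity is small in code but large in practice. The sketch below is illustrative only — the registry, scope names, and owner field are assumptions, not any vendor's API — but it shows the property the surveys say most organizations lack: every token maps back to exactly one agent and one accountable human.

```python
import secrets
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """One credential per agent, mapped to an accountable human owner."""
    agent_id: str
    owner: str             # the human accountable for this agent
    scopes: frozenset      # least-privilege permissions fixed at issuance
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))


class AgentRegistry:
    """Hypothetical registry: issues unique tokens, supports attribution."""

    def __init__(self):
        self._by_token = {}

    def register(self, agent_id, owner, scopes):
        ident = AgentIdentity(agent_id, owner, frozenset(scopes))
        self._by_token[ident.token] = ident
        return ident

    def attribute(self, token):
        """Map an observed token back to an agent and its human owner."""
        ident = self._by_token.get(token)
        return (ident.agent_id, ident.owner) if ident else None


registry = AgentRegistry()
a = registry.register("billing-agent-01", "alice@example.com", {"invoices:read"})
b = registry.register("billing-agent-02", "alice@example.com", {"invoices:read"})
assert a.token != b.token  # no credential shared across agents
print(registry.attribute(a.token))  # ('billing-agent-01', 'alice@example.com')
```

With a shared API key, the `attribute` call is impossible — any of the agents (or a script impersonating one) could be behind a given request, which is exactly the 83.4 percent problem the Gravitee respondents describe.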
The ownership structure is part of why. In most enterprises, AI agent deployments are commissioned by security leads (28 percent), development and engineering teams (21 percent), or IT (19 percent). IAM — the team with the deepest expertise in identity infrastructure — owns it at 9 percent. The people who understand identity least are the ones responsible for the infrastructure that most needs it.
The stakes are concrete. A misconfigured Supabase database in Moltbook, an AI agent social network, exposed roughly 1.5 million API authentication tokens days after launch — along with 35,000 email addresses and private messages between agents — as first reported by Wiz researchers. While Moltbook boasted 1.5 million registered agents, the database revealed only 17,000 human owners behind them — an 88-to-1 ratio. Most accounts were script-created bots. The exposure illustrated a pattern security teams are beginning to recognize: agents proliferate faster than the controls around them.
The ClawHavoc supply chain attack put a number on a different kind of risk. A coordinated campaign planted more than 800 malicious skills in ClawHub, the registry for OpenClaw — the open-source agent framework that went viral in late 2025 — representing roughly 20 percent of the entire catalog, according to Cisco's analysis. The malicious skills distributed infostealers, including a macOS credential harvester called Atomic Stealer (AMOS), through fake prerequisites. More than 135,000 OpenClaw instances were found exposed on the public internet, many of them vulnerable. Government agencies in China restricted use of the software within weeks of its viral moment, as we reported in January.
The vendor response
The agent security market has noticed. The RSAC floor offered a rapid survey of the emerging vendor landscape. Cisco unveiled DefenseClaw, an open-source governance layer built on OpenShell, the secure agent sandbox originally developed at Nvidia. Explicitly incorporating lessons from the OpenClaw security crisis, DefenseClaw bundles five scanning tools for agent skills, MCP servers, and A2A protocol traffic. Cisco also extended its Duo IAM platform to register AI agents with verified identities and map each to an accountable human owner. Jeetu Patel, Cisco's president and chief product officer, gave a keynote in which he described agents as "challenging the foundational assumptions every security architecture was built on."
Okta's agent security product — combining agent discovery and inventory with connectivity controls, an Agent Gateway, and a kill switch for revoking tokens and deactivating misbehaving agents — reaches general availability on April 30. Microsoft, which processes more than 100 trillion daily security signals across 1.6 million customers and protects one billion identities, announced that Agent 365, its portfolio of Microsoft 365 agents with unified security posture management, reaches general availability on May 1. Astrix Security described a four-method discovery architecture for NHIs: AI platform integrations, NHI fingerprinting, sensor telemetry, and a bring-your-own-service approach for enterprise-specific agent deployments. Vorlon released an AI Agent Flight Recorder and Action Center for forensic investigation and coordinated response in agentic ecosystems.
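The "kill switch" the vendors describe reduces to a simple control: revoke every live token for an agent in one operation and record who pulled the switch and why. None of the class or method names below come from Okta's or anyone else's product; this is a sketch of the mechanism under stated assumptions.

```python
import time


class AgentGateway:
    """Hypothetical gateway: tracks live agent tokens, supports revocation."""

    def __init__(self):
        self.active = {}      # agent_id -> set of live tokens
        self.audit_log = []   # (timestamp, agent_id, reason, tokens_revoked)

    def admit(self, agent_id, token):
        self.active.setdefault(agent_id, set()).add(token)

    def authorize(self, agent_id, token):
        return token in self.active.get(agent_id, set())

    def kill(self, agent_id, reason):
        """Revoke every live token for a misbehaving agent at once."""
        revoked = self.active.pop(agent_id, set())
        self.audit_log.append((time.time(), agent_id, reason, len(revoked)))
        return len(revoked)


gw = AgentGateway()
gw.admit("scraper-07", "tok-a")
gw.admit("scraper-07", "tok-b")
assert gw.authorize("scraper-07", "tok-a")

gw.kill("scraper-07", "anomalous data-exfiltration pattern")
assert not gw.authorize("scraper-07", "tok-a")  # access ends immediately
```

The hard part is not the revocation itself but the precondition: the gateway can only kill what it has inventoried, which is why discovery products like Astrix's four-method architecture sit upstream of any kill switch.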
These are real responses to a real problem. But they share a structural limitation: they solve what can be solved at the tool level. The harder problem is organizational.
What the tools don't solve
The production gap — 5 percent versus 85 percent experimenting — suggests enterprises know this. They are running pilots not because the technology doesn't work but because nobody has answered the accountability question: when an agent acting autonomously does something wrong, who fixes it, who owns the exposure, and how do you prove what happened? Ninety percent of organizations have expanded their privacy programs to account for AI, yet only 12 percent describe their AI governance as mature and proactive. That gap tells you where most enterprises actually are.
The tools announced this week are the industry's attempt to answer the accountability question in real time. Whether they move the needle from 5 percent to something larger depends on whether organizations treat agent identity as an infrastructure problem — which requires IAM teams at the table from the start, not brought in after the fact — or as a point solution, which is what most vendors are currently selling.
The 2.2 billion agents projected by 2030 will need to belong somewhere. The industry is trying to build that somewhere before the number arrives.
RSAC 2026 runs through March 28 in San Francisco.