AI agents are flooding into enterprise systems faster than anyone has figured out who is supposed to manage them. A survey of 228 IT and security professionals, published by the Cloud Security Alliance on March 24 at RSA Conference 2026, found that 85 percent of organizations now run AI agents in production environments — but only 9 percent point to their identity and access management teams as the primary owners of AI agent identity. The team that knows identity best is almost entirely absent from the one place where identity is most on fire.
The survey, conducted by the CSA in January 2026 and commissioned by Aembit, a non-human identity platform backed by $59.6 million in total funding (Tracxn, September 2025), paints a picture of an industry that deployed its way into an infrastructure problem. Sixty-eight percent of organizations say they cannot clearly distinguish between activity performed by an AI agent and activity performed by a human. Seventy-three percent expect AI agents to become vital to their operations within the next year. The gap between ambition and control is not theoretical — it is the current state of enterprise AI.
"Existing IAM approaches were not designed for autonomous agents and are showing strain as deployments scale," said Hillary Baron, AVP of research at the CSA, in the survey release. "AI agents are inheriting human permissions, operating under shared accounts, and expanding the attack surface in ways that existing IAM tools were not designed to handle," said David Goldschlag, CEO of Aembit, in the same release.
The structural finding is in the ownership numbers. Twenty-eight percent of organizations say security leads own AI agent identity. Twenty-one percent say development or engineering teams do. Nineteen percent say IT. Nine percent say IAM. Those four groups account for 77 percent of primary ownership — and three of them are improvising. IAM is the discipline built around the question of who has access to what, and it is not in the room.
The identity patchwork
The problem is not just organizational. It is architectural. When the CSA asked how agents are actually provisioned, the answers reveal a credential landscape with no coherent audit trail. Fifty-two percent of organizations use workload identities for their AI agents. Forty-three percent rely on shared service accounts. Thirty-one percent allow agents to operate under human user identities. Those numbers overlap — an organization can be using all three — which means the same agent may be acting under multiple credential types simultaneously, none of which is designed to track what an autonomous system actually did.
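The audit-trail failure the survey describes can be made concrete. The sketch below uses a simplified, hypothetical log schema (the field names, account names, and `svc-` prefix convention are illustrative assumptions, not any vendor's actual format) to show why a shared service account makes attribution impossible after the fact.

```python
# Hypothetical audit log entries. Under a shared service account,
# a human script, a cron job, and an AI agent all produce
# identical records -- the log cannot say which one acted.
audit_log = [
    {"principal": "svc-shared-ops", "action": "read",  "resource": "crm/accounts"},
    {"principal": "svc-shared-ops", "action": "write", "resource": "tickets/queue"},
    {"principal": "jane.doe",       "action": "read",  "resource": "crm/accounts"},
]

def attribute(entry: dict) -> str:
    """Best-effort guess at who acted, given only the log entry."""
    principal = entry["principal"]
    if principal.startswith("svc-"):
        # Shared service account: actor is unknowable from the log alone.
        return "indeterminate"
    # Looks like a human identity -- but where agents run under human
    # identities (31 percent of organizations), even this is unreliable.
    return "human (unverified)"

for entry in audit_log:
    print(entry["principal"], "->", attribute(entry))
```

The point is not the code but the ceiling it illustrates: once the credential is shared, no amount of downstream log analysis recovers the actor.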
The consequences show up in the over-privilege data. Seventy-four percent of respondents say AI agents often receive more access than necessary. Seventy-nine percent say agents create new access pathways that are difficult to monitor. When an agent can act across systems — pulling from a data source, writing to a ticketing queue, triggering a deployment pipeline — the permissions it inherited from a human user or a shared service account become a map of everything it could touch, whether or not that was the intent.
Thirty-three percent of organizations do not know how often AI agent credentials are rotated. Thirty-two percent are not certain how much time is required to implement and maintain authentication for a typical AI agent. Only 22 percent report that access frameworks are applied very consistently to AI agents. These are not edge cases. They describe the mainstream.
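Knowing how often credentials rotate is a small piece of machinery, which makes the 33 percent figure striking. A minimal sketch of what that visibility requires, assuming an illustrative 90-day rotation window (the window, credential record, and dates are assumptions for the example, not survey data):

```python
# Minimal rotation check: compare a credential's issue time against a
# policy window. The 90-day window is an illustrative assumption.
from datetime import datetime, timedelta, timezone

ROTATION_WINDOW = timedelta(days=90)

def needs_rotation(issued_at: datetime, now: datetime) -> bool:
    """True if the credential is older than the rotation window."""
    return now - issued_at > ROTATION_WINDOW

# Example: a credential issued in October, checked in mid-January (106 days).
issued = datetime(2025, 10, 1, tzinfo=timezone.utc)
checked = datetime(2026, 1, 15, tzinfo=timezone.utc)
print(needs_rotation(issued, checked))  # True
```

An organization that cannot answer this check for its agents does not know which of its autonomous credentials are stale — which is exactly the uncertainty a third of respondents reported.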
Containment as governance
When something goes wrong — an agent accesses data it should not, or behaves in an unexpected way — the response options reveal how far real-time policy enforcement has to go. Forty-nine percent of organizations say their primary containment action is disabling identities or revoking tokens. Forty-two percent say they terminate the compute environment where the agent runs. Thirty-three percent modify access policies in real time.
The first two are emergency responses. Revoking a token or killing a compute instance stops the incident. It does not prevent it, and it does not explain it. Real-time policy modification — changing what the agent is permitted to do while it is doing it — requires the identity infrastructure that 91 percent of organizations have not assigned to anyone. Containment is a substitute for the access control that was never built.
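The distance between the two response styles can be sketched in a few lines. The policy store, agent name, and resource names below are hypothetical; real enforcement would live in an identity provider or policy engine, not a dictionary. The contrast is the point: revocation erases the agent's access entirely, while real-time policy modification narrows it and lets the work continue.

```python
# Hypothetical in-memory policy store mapping agent -> resource -> allowed actions.
policies = {
    "agent-7": {"crm/accounts": {"read"}, "tickets/queue": {"read", "write"}},
}

def revoke_identity(agent_id: str) -> None:
    """Emergency stop (the 49 percent answer): the agent loses everything."""
    policies.pop(agent_id, None)

def modify_policy(agent_id: str, resource: str, allowed: set) -> None:
    """Real-time policy change (the 33 percent answer): narrow, don't kill."""
    policies.setdefault(agent_id, {})[resource] = set(allowed)

# Respond to suspicious writes by making the ticket queue read-only
# for this agent, instead of revoking its identity outright.
modify_policy("agent-7", "tickets/queue", {"read"})
print(policies["agent-7"]["tickets/queue"])  # {'read'}
```

The second path presupposes per-agent identities and per-agent policies — the infrastructure most organizations have not yet assigned an owner.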
Prosus, the Amsterdam-based technology investor majority-owned by South Africa's Naspers, deployed 37,000 AI agents as part of an AI-first pivot across its portfolio. WeBuyCars, a Prosus subsidiary, has used AI agents to purchase more than 2,800 vehicles autonomously, according to ITWeb. At that scale, the question of which agent can touch which system is not a theoretical governance concern — it is a daily operational risk.
The Aembit note
Aembit co-developed the questionnaire with CSA analysts and financed the project; CSA conducted the survey and performed the data analysis independently. This is standard industry practice for vendor-commissioned research, and it means the specific questions were shaped by the company selling the solution to the problem the questions identify. The underlying data — 228 respondents, January 2026 — is real, and the findings track with structural IAM challenges that predate this survey. The conflict is worth a sentence. It is not disqualifying.
The CSA also announced at RSA 2026 the formation of a new 501(c)(3) nonprofit foundation dedicated to what it calls securing the agentic control plane. The foundation is a sign that the industry is beginning to name the problem publicly. Naming it and solving it are different work.
What this is
The headline number — 68 percent cannot distinguish agent activity from human activity — sounds like a surveillance problem. It is, in part. But the more structurally significant number is 9 percent. IAM teams were built to answer the question of which principal can perform which action against which resource. That is exactly the question that agentic AI makes acute. The fact that they are not the ones being asked to answer it, at 91 percent of organizations, means the people with the most relevant expertise are watching from the sidelines while everyone else guesses.
This is not a story about AI being dangerous. It is a story about infrastructure being built without the people who know how to build it safely in the room. That is a familiar pattern in technology adoption — and it is always easier to fix before the agents have already ordered 2,800 cars on their own.