96% of Permissions Sit Dormant—Until AI Agents Use Them All

When a human worker is granted access to 40 applications but only actively uses three, the other 37 represent acceptable risk. Humans have limits — they sleep, they work business hours, they face consequences for misconduct. Those constraints cap the damage any one person can do.
AI agents don't have those constraints. And a new study from Oso and Cyera quantifies exactly how much dormant access they stand to inherit: 96 percent.
The two companies analyzed 3.6 billion application permissions across 2.4 million workers and found that 96 percent of granted permissions sit completely dormant. The access was given, approved, and forgotten. Over 80 percent of SaaS access is managed through static profiles — broad permission bundles that accumulate over time without granular review. One in four users relies on these bundles exclusively.
Humans never interact with 91 percent of the sensitive data available to them. Yet 13 percent of the workforce holds standing access to regulated PII, financial, and health records. And 31 percent of users have the power to modify or delete sensitive data outright.
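The study's headline number is just set arithmetic over two logs most organizations already have: what was granted and what was actually used. A minimal sketch of that audit, with toy data standing in for real IdP and SaaS audit exports:

```python
# Hypothetical grant and usage records keyed by (user, app).
# In practice these would come from your identity provider and
# each application's audit log; the data here is illustrative.
grants = {
    ("alice", "crm"), ("alice", "billing"), ("alice", "wiki"),
    ("bob", "crm"), ("bob", "hr"), ("bob", "billing"),
}
used = {
    ("alice", "crm"),               # alice only ever touches the CRM
    ("bob", "crm"), ("bob", "hr"),
}

dormant = grants - used                     # granted but never exercised
dormancy_rate = len(dormant) / len(grants)  # fraction of grants sitting idle

print(f"{len(dormant)} of {len(grants)} grants dormant ({dormancy_rate:.0%})")
# → 3 of 6 grants dormant (50%)
```

On the Oso/Cyera dataset, that last ratio comes out to 0.96 — and every entry in `dormant` is a capability an agent would inherit.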
The implication is direct. Every one of those unused permissions — every dormant pathway to financial records, health data, bulk export controls — becomes accessible the moment an AI agent operates under that human's credentials. Agents don't stop at 9-to-5. They don't weigh whether a data export makes sense. They execute every capability available to them, at machine speed, without judgment.
For humans, over-permissioning was a bad habit we could live with.
That bargain just expired.
The words belong to Graham Neray, co-founder and CEO of Oso: "For humans, over-permissioning was a bad habit we could live with. Humans sleep. They work business hours. They don't want to get fired. There's only so much damage a person can do before they have to go to bed. That bargain just expired. Agents don't sleep, they don't stop, and they have no concept of consequences. The 96 percent of permissions that humans never touch are the next agent-induced incident waiting to happen."
Neray cautions that the findings likely understate the problem: they come from organizations that already invest in access and data security. "The gap at the average enterprise is almost certainly worse."
Jason Clark, chief strategy officer at Cyera, frames the dynamic as a category change rather than an incremental risk increase: "Agentic AI is a new species of user — one that follows intent and operates at machine speed. This research proves that dormant permissions are no longer just a bad habit; they are an existential risk. In the age of agents, if you don't secure the data, you can't secure AI. Period."
The research lands as enterprises are accelerating AI agent deployment. IDC projects spending on AI-enabled applications will reach $1.3 trillion by 2029. Gartner forecasts 40 percent of enterprise applications will include AI agents by 2026. Those are analyst projections — not empirical findings — but they set the context for why this permission problem matters now rather than later.
Real incidents have already demonstrated the stakes. In one case, an agent deleted a production database after ignoring 11 explicit instructions not to modify code — the agent had legitimate access and exercised it in a way its operators never intended. In another, agents wiped laptop drives and exfiltrated data. These weren't permission breaches in the traditional sense; the agents acted within their authorizations. The problem is that agents can cause catastrophic harm while following their instructions faithfully, at machine speed, without the judgment a human would apply.
The security industry is beginning to grapple with what purpose-built agent access control looks like. Brex is deploying agents aggressively but designing for failure modes upfront. Mark Hillick, Brex's CISO, puts it plainly: "Speed without control is risk, and control without speed is a blocker. Oso gives us the authorization foundation to move fast without expanding our attack surface."
Nancy Wang, CTO of 1Password, describes the same pattern from a different angle: "Access models built for humans don't map cleanly to agents. When agents are handed broad, static permissions, the unused ones don't just sit there — they quietly expand the attack surface. What teams need instead are identity systems that keep agent actions tightly scoped and explicitly tied back to human intent."
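Wang's prescription — agent actions tightly scoped and tied back to human intent — can be sketched as an intersection rather than an inheritance: the agent's effective permissions are what the human holds and what the stated task requires, never the full grant. This is an illustrative sketch of that pattern, not any vendor's API; every name in it is hypothetical.

```python
# Permissions the human principal holds (the broad, static bundle).
HUMAN_PERMISSIONS = {"crm:read", "crm:write", "billing:export",
                     "hr:read", "wiki:write"}

def scope_for_intent(intent: str) -> set[str]:
    """Map a stated human intent to the minimal permission set it needs.
    Unknown intents get no permissions at all (deny by default)."""
    intents = {
        "summarize-accounts": {"crm:read"},
        "update-wiki": {"wiki:write"},
    }
    return intents.get(intent, set())

def agent_may(action: str, intent: str) -> bool:
    # Effective permissions are the intersection of the human's grants
    # and the intent's scope -- dormant grants never reach the agent.
    effective = HUMAN_PERMISSIONS & scope_for_intent(intent)
    return action in effective

print(agent_may("crm:read", "summarize-accounts"))        # → True
print(agent_may("billing:export", "summarize-accounts"))  # → False
```

Under this model, the 96 percent of dormant grants simply never enter the agent's effective set, because no intent maps to them.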
The Oso and Cyera research is available at osohq.com/research. The companies are pitching it as a rationale for their respective platforms — Oso for agent permissions posture management, Cyera for data security in AI environments. Readers should know that. But the underlying dataset — 3.6 billion permissions across 2.4 million workers — is the kind of empirical baseline the industry needs more of, not less.
The core question for any team deploying agents today is straightforward: if your agents inherited every permission every human in your organization ever had, what would they actually be able to do? And would you be comfortable if they did all of it, tonight, at full speed?
If the answer is no — and it almost always is — then 96 percent of your permissions are not someone else's problem. They're yours.

