The enterprise has uploaded 18,033 terabytes of data to AI and machine learning applications in 2025. That is a 93 percent year-over-year increase. It is also the number that makes the security case for agentic AI different from anything the infrastructure world has had to manage before.
Zscaler's ThreatLabz team, which processes a substantial portion of global enterprise network traffic, published those figures this week. The scale is without precedent in the era of cloud-hosted SaaS. Finance and insurance sectors account for 23 percent of all AI and ML traffic by volume, more than any other industry. Engineering departments generated 48.9 percent of all AI usage, followed by IT at 31.8 percent and marketing at 6.9 percent. OpenAI services was the top LLM vendor every month of 2025.
The usage numbers are not the story. The story is what those numbers mean when something goes wrong.
Deepen Desai, executive vice president for cybersecurity at Zscaler, put it plainly: 100 percent of AI systems and applications observed had critical vulnerabilities. His team's red team work found that 90 percent of AI systems were compromised in under 90 minutes once an attack was underway. The median time to first critical failure was 16 minutes. Those numbers assume a system that is reachable. They do not account for agents that can initiate their own outbound connections.
ChatGPT alone generated 410 million data loss prevention policy violations across Zscaler's customer base last year. To understand what that number means in practice: a DLP violation is an attempt to send sensitive data somewhere it should not go. Four hundred and ten million attempts, from one product, inside one year, across enterprises that presumably have DLP policies and presumably have security teams. The data was uploaded anyway.
The DLP number is a proxy for a structural problem. Organizations are routing enterprise data to AI applications faster than they are updating their security policies to account for what those applications might do with it. The 410 million violations represent the gap between the data that left and the data that was allowed to leave. That gap is not a configuration error. It is an architectural mismatch between legacy DLP tooling, built to monitor human users and known applications, and AI agents that optimize for the objective they were given regardless of whether that objective includes data governance constraints.
The OWASP Top 10 for Agentic Applications, published in 2026, names this the Least Agency principle: autonomy is a feature that should be earned, not a default setting. If you give an agent a blank check to solve a problem, you have created an inside threat that a single malicious prompt can exploit. That is not a hypothetical. It is how most enterprise agent deployments currently work.
The threat is not confined to data exfiltration. Desai warned that agentic AI will automate cyberattacks at a scale that manual tooling cannot match. Gartner independently predicted that within two years AI agents will accelerate the time it takes threat actors to hijack exposed accounts by 50 percent. The automation vector cuts both directions: organizations deploying agents to move fast and attackers using agentic tooling to move faster.
Forrester's forecast is starker still: the firm expects an agentic AI deployment will cause a publicly disclosed data breach in 2026, leading to employee dismissals. That is a specific prediction with a specific consequence attached. It is the kind of forecast that looks conservative until it does not.
Engineering teams account for nearly half of all enterprise AI usage. Those are the systems that write code, manage infrastructure, and touch production environments. An agent operating with corrupted context in a production engineering system is a different category of risk from an agent summarizing a meeting transcript. The concentration of AI usage in engineering, combined with the 100 percent critical vulnerability rate across observed systems, points toward a class of incidents that have not yet occurred at scale but whose preconditions already exist in most enterprise deployments.
The 18,033 terabytes is not the problem. The problem is that the organizations which uploaded that data are, in most cases, operating with security tooling designed for a world where the data stayed inside the perimeter and the users were human. Neither assumption holds in an agentic environment. The 410 million DLP violations is the visible symptom. The architecture underneath is what needs fixing.
What to watch: whether the Zscaler data prompts a wave of enterprise DLP policy updates, or whether the gap between deployment velocity and security maturity simply widens until something breaks publicly. The Forrester forecast gives it twelve months.