When Francis deSouza, Google Cloud's chief operating officer, told reporters that security teams now need AI to fight AI, he wasn't making a vision statement. He was drawing a conclusion from a number: 22 seconds.
That's how long it now takes, on average, for an attacker who has gained initial access to a corporate system to hand that access off to a secondary threat actor, whether a ransomware gang or a data exfiltration crew. Three years ago, the same handoff took eight hours. The data comes from Google's M-Trends 2026 report [[1]](https://cloud.google.com/blog/products/identity-security/next26-redefining-security-for-the-ai-era-with-google-cloud-and-wiz), and it is the number that makes every other announcement at Google Cloud Next 2026 worth paying attention to.
The logic is simple. If attack chains move at machine speed, defenders cannot move at human speed. You need agents.
Google Cloud shipped three new Security Operations agents at Next this week [[1]](https://cloud.google.com/blog/products/identity-security/next26-redefining-security-for-the-ai-era-with-google-cloud-and-wiz): a Threat Hunting agent in preview that proactively looks for novel attack patterns, a Detection Engineering agent in preview that identifies coverage gaps and generates new detection rules, and a Third-Party Context agent coming soon that enriches workflows with external data. These join the Triage and Investigation agent, now generally available, which Google says processed over five million alerts in the past year and cut a typical thirty-minute manual analysis to sixty seconds.
The announcement is real and the numbers are real. But the framing that matters isn't the products — it's the economics.
The CISO Dilemma Nobody Is Naming
Here is the tension the announcements expose: AI security agents are necessary, but they are also a new attack surface. Prompt injection, tool poisoning, sensitive data leakage — these are the risks that come from putting AI systems in the loop of security decisions. Google knows this. The fact that it shipped Agent Identity, Agent Gateway, and Model Armor on the same day as the new agents isn't coincidence. It's an admission [[1]](https://cloud.google.com/blog/products/identity-security/next26-redefining-security-for-the-ai-era-with-google-cloud-and-wiz).
Agent Identity gives every agent a unique cryptographic ID with scoped authentication and human delegation built in. Agent Gateway enforces policy across agent-to-agent and agent-to-tool connections, inspecting MCP and A2A protocol traffic. Model Armor adds runtime protection against prompt injection and data leakage, integrating with Agent Gateway, Agent Runtime, LangChain, and Firebase.
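The pattern those three products describe — a scoped identity per agent, with a gateway that checks every tool call against that scope — can be sketched in a few lines. This is an illustrative toy, not Google's actual API: the class names, fields, and tool names are all assumptions made up for this sketch.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of gateway-style policy enforcement for agent-to-tool
# calls. Every agent carries a scoped identity tied to a human principal,
# and the gateway denies any call outside that scope. None of these names
# come from Google's products; they only illustrate the pattern.

@dataclass
class AgentIdentity:
    agent_id: str                  # unique per-agent ID (cryptographic in a real system)
    delegated_by: str              # the human principal the agent acts for
    allowed_tools: set = field(default_factory=set)  # scoped authorization

class Gateway:
    def authorize(self, identity: AgentIdentity, tool: str) -> bool:
        # Deny by default: a tool call succeeds only if it is in scope.
        return tool in identity.allowed_tools

triage_agent = AgentIdentity(
    agent_id="agent-7f3a",
    delegated_by="analyst@example.com",
    allowed_tools={"read_alerts", "query_threat_intel"},
)
gw = Gateway()
print(gw.authorize(triage_agent, "read_alerts"))   # in scope
print(gw.authorize(triage_agent, "delete_logs"))   # out of scope: blocked
```

The deny-by-default check is the design choice that matters: the agent's capabilities are an explicit allowlist attached to its identity, not an implicit inheritance of whatever its human delegator can do.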
In other words: Google spent $32 billion acquiring Wiz, closed that deal on March 11, 2026 [[2]](https://www.securityweek.com/wiz-joins-google-cloud-as-landmark-acquisition-closes/), and five weeks later shipped a product roadmap in which the headline is the new agents and the subtext is the controls for those same agents. The governance layer exists because the threat model demanded it.
Autonomous Response as a Line Item
The economic argument for this stack is not that it automates security — it's that it changes what security costs.
The traditional SOC model is reactive: analysts respond to alerts, prioritize by severity, and work through a queue. That model scales with headcount. Google's pitch is that AI-augmented security operations change the unit economics. The Triage and Investigation agent's five-million-alert processing run is presented not as a proof of concept but as a production metric. The IDC data Google cited — reductions in mean time to detect and respond, fewer false positives, higher analyst productivity — is the ROI case.
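The arithmetic behind that unit-economics claim is worth making explicit. Using only the figures stated above — a thirty-minute manual triage versus the sixty-second agent figure, across the five million alerts Google cites — the labor delta looks like this. The hourly analyst rate is an assumed illustrative number, not from any source.

```python
# Back-of-the-envelope math on the triage unit-economics claim.
# Inputs from the article: 5M alerts, 30 min manual vs ~60 s agent-assisted.
# The analyst cost rate is an assumption for illustration only.

alerts = 5_000_000
manual_minutes = 30
agent_minutes = 1

manual_hours = alerts * manual_minutes / 60   # 2,500,000 analyst-hours
agent_hours = alerts * agent_minutes / 60     # ~83,333 analyst-hours
saved_hours = manual_hours - agent_hours      # ~2.42M analyst-hours

hourly_rate = 60  # assumed fully loaded analyst cost, USD/hour (illustrative)
print(f"analyst-hours saved: {saved_hours:,.0f}")
print(f"notional labor value: ${saved_hours * hourly_rate:,.0f}")
```

Even if the real-world ratio is far less favorable than 30:1, the point stands that this is a line-item argument, not a capability argument.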
If that case holds, a CISO can justify an agentic security platform not as an addition to the team but as a restructuring of what the team does. Human expertise moves up the value chain: from triage to oversight, from response to governance design.
Wiz's Role and the Integration Question
Wiz, which joined Google Cloud in March [[2]](https://www.securityweek.com/wiz-joins-google-cloud-as-landmark-acquisition-closes/), brings the cloud security graph that Google lacked. At Next, Wiz announced support for Databricks and new agent studios — AWS Agentcore, Gemini Enterprise Agent Platform, Azure Copilot Studio, Salesforce Agentforce — plus a dynamic AI Bill of Materials that inventories AI frameworks, models, and IDE extensions across an environment [[1]](https://cloud.google.com/blog/products/identity-security/next26-redefining-security-for-the-ai-era-with-google-cloud-and-wiz). The AI-BOM is the shadow AI detection tool: it surfaces what is writing code in your stack, sanctioned or not.
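The AI-BOM concept reduces to an inventory-and-diff operation: enumerate the AI components present in an environment, then subtract the sanctioned set to surface shadow AI. The sketch below is a deliberately simplified stand-in — the component list, the sanctioned set, and the flat-dependency input are all assumptions; the real product is described as crawling frameworks, models, and IDE extensions, not a requirements list.

```python
# Toy illustration of the AI-BOM idea: inventory AI components, then
# diff against an approved set to flag shadow AI. All names here are
# illustrative assumptions, not Wiz's actual detection logic.

AI_COMPONENTS = {"langchain", "transformers", "openai", "anthropic", "llama-cpp"}
SANCTIONED = {"langchain"}  # what security has approved (assumed)

def ai_bom(dependencies: list[str]) -> dict:
    found = {d for d in dependencies if d in AI_COMPONENTS}
    return {
        "ai_components": sorted(found),           # everything AI-related present
        "shadow_ai": sorted(found - SANCTIONED),  # present but unapproved
    }

report = ai_bom(["requests", "langchain", "openai", "numpy"])
print(report)
```

The interesting output is the second key: the gap between what is running and what was approved is precisely the "what is writing code in your stack, sanctioned or not" question.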
The integration timeline is notable. Google's acquisition of Wiz closed March 11. The Next announcements shipped less than six weeks later. That is fast for a $32 billion deal. It suggests either that the integration planning started long before the deal closed, or that the product roadmap was already aligned, or both. Either way, it is the strongest evidence so far that mega-acquisitions in cloud security can operationalize at startup speed.
What Is Not Yet Proven
The 22-second handoff figure is directional, not diagnostic. It comes from M-Trends 2026 [[1]](https://cloud.google.com/blog/products/identity-security/next26-redefining-security-for-the-ai-era-with-google-cloud-and-wiz), which draws on Mandiant's incident response data — a self-selected cohort of organizations that already pay for Mandiant and Wiz. It reflects a real trend in adversary behavior, but it is not a cross-industry benchmark. Treat it as the problem statement, not the proof.
Agent Identity and Agent Gateway are real products, generally available or in preview depending on the component. But Google did not name a customer running Agent Identity in production at announcement time. The governance stack solves a real problem — but whether enterprises yet feel that problem acutely enough to buy the solution remains an open question.
The Threat Hunting and Detection Engineering agents are in preview. The Third-Party Context agent is coming soon. These are not GA products. That matters for anyone evaluating this as a current-state deployment rather than a roadmap.
Independent coverage from The Register [[3]](https://www.theregister.com/2026/04/22/google_unleashes_even_more_ai) noted the breadth of the agentic security push, with one observer describing the governance tools as Google attempting to "own the guardrails" for a class of technology it is simultaneously selling.
The Bottom Line
Google Cloud's security announcements at Next are not really about AI agents that hunt threats. They are about the infrastructure layer that makes autonomous security operations survivable — identity, policy enforcement, runtime protection — and about the economic argument that the shift from reactive labor to proactive capability is a line item decision, not a philosophical one.
The 22-second window is the pressure point. The governance tools are the answer. The question for buyers is whether those two things together constitute a product worth deploying today, or a preview of what the industry will need once the attack surface catches up to the sales pitch.