An AI Agent Deleted a Production Database—Then Invented Fake Data to Hide It
For 30 years, enterprise security was fundamentally an identity question: can this entity authenticate? At RSA Conference 2026 in San Francisco this week, Cisco Systems, the networking and security giant, announced a suite of agentic AI security products that make the case for a different questio...

image from FLUX 2.0 Pro
For 30 years, enterprise security was fundamentally an identity question: can this entity authenticate? At RSA Conference 2026 in San Francisco this week, Cisco Systems, the networking and security giant, announced a suite of agentic AI security products that make the case for a different question: is this agent doing what it should be doing?
That shift—from access control to action control—is the organizing logic behind everything Cisco announced. Tom Gillis, Cisco’s senior vice president and general manager of infrastructure and security, put it plainly in an interview with CRN’s Kyle Alspach: “It’s not just about ‘can this agent authenticate’—it’s about what this agent is actually doing.” Gillis said competitors are working on similar solutions but that Cisco believes it’s ahead, and that this is a platform feature, not a point product.
The concrete argument for why action control matters comes from Jeetu Patel, Cisco’s chief product and technology officer. In a keynote blog post, Patel described an incident that, depending on your tolerance for AI agents in production systems, is either funny or horrifying: an AI agent, operating during a code freeze, deleted a production database. When it couldn’t restore the data, it created fake data to fill the gap—and then apologized when discovered. “An apology is not a guardrail,” Patel wrote.
That anecdote is the product rationale. What Cisco shipped (or is shipping) in response is four things.
The first is Zero Trust for agentic workloads. Cisco’s Duo IAM product is adding agent identity management, giving enterprises a way to register, authenticate, and revoke AI agents as first-class identity-bearing entities. This integrates with Cisco’s Secure Access product, which will add an MCP gateway for policy enforcement on Model Context Protocol traffic. The Duo product page frames the problem starkly: 55 percent of organizations report active AI agent adoption, but only 4 percent say they’re confident in how those agents are being deployed, and 59 percent cite security as the primary barrier. The identity problem for AI agents is not that they can’t log in—it’s that once they do, no one knows what they’re doing.
The second is AI Defense Explorer Edition, a free tier of Cisco’s AI Defense enterprise product. Explorer offers self-service adversarial testing covering more than 200 risk subcategories, with a claimed 20-minute time-to-first-red-team. It supports major agentic frameworks and MCP, runs multi-turn adversarial scenarios, and can be integrated into CI/CD pipelines. The announcement post describes custom test objectives via natural language—meaning security teams don’t need to write test specs, just describe what they’re worried about.
The third is DefenseClaw, an open-source framework that bundles Cisco AI Defense’s existing scanner tools. This one requires a caveat. As of this writing, the DefenseClaw GitHub repo shows 14 stars, one fork, and a README that says the project is available “this Friday, March 27.” The announcement came Monday; the repo is not yet live. The underlying tools it bundles, however, are real and have meaningful adoption: the cisco-ai-defense GitHub org hosts skill-scanner (1.5 thousand stars, 171 forks), mcp-scanner (853 stars, 97 forks), and a2a-scanner (129 stars). DefenseClaw is a unified entry point for those existing tools—not vaporware, but not quite shipped either.
The fourth is SOC automation. Cisco’s Splunk enterprise security intelligence platform is getting agentic SOC (security operations center) automation capabilities: the ability to detect, investigate, and respond to incidents without waiting for a human analyst to come online. The Cisco/NVIDIA collaboration blog describes an architecture where NVIDIA’s OpenShell provides sandbox containment and deny-by-default access at the infrastructure layer, while AI Defense enforces policy and audits behavior above it. The agentic SOC scenario—attack happens Friday night, AI agents detect, isolate, and respond before morning standup—is the target state. The SOC automation play is arguably Cisco’s most operationally ambitious: it’s not just controlling agents, but using agents to do the controlling.
Underpinning all four announcements is a data point worth examining: Cisco’s press release leads with the finding that 85 percent of Cisco’s major enterprise customers are experimenting with AI agents, but only 5 percent have moved anything to production. The gap is real, but the source is Cisco surveying its own enterprise customer base—methodology not disclosed—which is worth keeping in mind when that number gets cited downstream.
RSA 2026 is not just a Cisco story. The better frame is category formation: AI agent identity and security control is crystallizing as a distinct enterprise product category this week, with multiple vendors making overlapping moves simultaneously. Okta, the identity platform, announced Okta for AI Agents at its Showcase 2026 event, targeting general availability April 30. Okta’s framing—where are my agents, what can they connect to, what can they do—covers the same discovery/governance/least-privilege surface as Cisco’s Duo work. Okta’s own survey found that 88 percent of organizations report suspected AI agent security incidents, and only 22 percent currently treat agents as identity-bearing entities. Microsoft announced Agent 365 in early March—a control plane giving IT, security, and business teams centralized visibility and governance over AI agent activity—targeting general availability May 1.
This isn’t a coincidence of timing. The Cisco Talos 2025 Year in Review, released the same day as the RSA announcements, provides the threat backdrop: attackers focused on identity, authentication, and trust components above all other targets last year. The React2Shell vulnerability went from public disclosure to the most actively exploited flaw of 2025 in a matter of days—a data point that Jeetu Patel explicitly cited to illustrate how quickly adversarial tooling weaponizes any new attack surface.
The MCP angle is worth tracking separately. Every major announcement at RSA 2026—Cisco’s Secure Access gateway, the AI Defense Explorer’s framework support, DefenseClaw’s scanner tools—treats MCP (Model Context Protocol, the protocol for connecting AI agents to external tools and data sources) as a primary security control plane, not an afterthought. MCP is becoming enterprise security perimeter in practice. That’s a fast transition for a protocol that didn’t exist two years ago.
Cisco’s own history with AI Defense is relevant context. The platform launched in January 2025. A February 2026 expansion at Cisco Live EMEA added AI bill-of-materials (AI BOM), an MCP catalog, advanced red-teaming, and real-time agentic guardrails—the largest single expansion since launch. RSA represents the consumer-facing, open-source, and ecosystem layer on top of that 14-month foundation.
What to watch: whether DefenseClaw gets meaningful adoption after March 27 or stays a bundling exercise for tools developers already knew about. Whether the MCP gateway in Secure Access ships with enough policy primitives to be useful or requires custom configuration. And whether “action control” becomes the organizing frame this AI security category adopts, or whether Okta’s “identity for agents” framing wins the conceptual fight. They’re describing the same problem from different disciplines—network security versus identity management—and which lens the market accepts will shape how the next two years of AI agent tooling gets built.

