For two years, the AI industry shipped agent systems with a structural flaw that the Open Web Application Security Project, a cybersecurity consortium that tracks the most dangerous application vulnerabilities, had already flagged as a top-tier risk. The fix Anthropic described in an engineering blog post Tuesday is not subtle: it amounts to rebuilding the basic runtime architecture of an AI agent from scratch.
Anthropic, the AI safety company behind Claude, launched Managed Agents in public beta on April 8, 2026. The system lets developers deploy long-horizon AI agents that can browse the web, write and run code, and call external tools on their behalf. Early customers include Notion, the productivity platform; Sentry, the error-monitoring service; Asana, the work-management tool; and Rakuten, the Japanese e-commerce and fintech conglomerate. Pricing is set at $0.08 per agent runtime hour plus model usage charges, according to SiliconANGLE.
The architectural change is the story. In the coupled design that most of the industry still uses, an AI agent generates code inside the same container where its credentials live. That means if an attacker injects a malicious prompt through a compromised webpage the agent reads, the injected instructions can read the environment and steal whatever credentials are present: Git tokens, OAuth tokens, database passwords, all sitting in the same sandbox as the generated code. According to the OWASP Top 10 for Agentic AI 2026, prompt injection combined with credential theft is a documented attack chain, not a theoretical one.
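The failure mode is easy to demonstrate. In this minimal sketch (all names and values are illustrative, not from Anthropic's system), credentials are provisioned as environment variables in the same process that executes agent-generated code, so injected instructions need nothing more exotic than ordinary environment access:

```python
import os

# The coupled design: credentials provisioned straight into the sandbox.
os.environ["GIT_TOKEN"] = "ghp_example_token"    # placeholder value
os.environ["DB_PASSWORD"] = "example_password"   # placeholder value

# Code the agent "generated" after reading a compromised webpage.
# The injected payload simply reads the environment it runs in.
injected_snippet = """
import os
leaked = {k: v for k, v in os.environ.items()
          if k in ("GIT_TOKEN", "DB_PASSWORD")}
"""

scope = {}
exec(injected_snippet, scope)  # stands in for the agent's code runner
print(scope["leaked"])         # every secret in the sandbox is now exposed
```

Nothing about the payload is sophisticated; the vulnerability is entirely in where the secrets live.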
Anthropic's fix is called the credential vault. The idea, described in the engineering blog post by Lance Martin, Gabe Cemaj, and Michael Cohen, is to keep credentials entirely outside the sandbox where generated code runs. For Git access, the system clones repositories and configures local remotes during sandbox initialization so the agent never handles the token. For third-party APIs that use OAuth, a dedicated MCP proxy fetches session-associated credentials from the vault and makes the external call; the harness never sees the tokens.
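The vault pattern can be sketched in a few lines. This is a hypothetical illustration of the design described in the post, not Anthropic's code: the class names, methods, and session IDs are all assumptions. The key property is that sandbox-side code holds only a session ID; the token is fetched and used on the proxy's side of the boundary.

```python
class CredentialVault:
    """Holds tokens outside the sandbox, keyed by (session, service)."""
    def __init__(self):
        self._tokens = {}

    def store(self, session_id, service, token):
        self._tokens[(session_id, service)] = token

    def fetch(self, session_id, service):
        return self._tokens[(session_id, service)]


class MCPProxy:
    """Makes the external call on the sandbox's behalf."""
    def __init__(self, vault):
        self._vault = vault

    def call(self, session_id, service, request):
        token = self._vault.fetch(session_id, service)
        # In production this would be an authenticated HTTP request;
        # here we only show that the token stays on this side.
        return {"service": service, "request": request, "authed": bool(token)}


vault = CredentialVault()
vault.store("sess-1", "github", "ghp_example")  # placeholder token
proxy = MCPProxy(vault)

# Sandbox-side code knows its session ID and nothing else.
response = proxy.call("sess-1", "github", "GET /repos")
print(response)
```

Because the sandbox never receives the token, an injected prompt that fully compromises the generated code still has nothing to steal.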
"The structural fix was to make sure the tokens are never reachable from the sandbox where Claude generated code runs," the team wrote.
This follows the same pattern as operating-system containerization. A compromised process in a container cannot reach the host's root credentials because the kernel enforces a boundary. Anthropic's credential vault enforces an equivalent boundary between the agent's reasoning layer — what the team calls the brain — and the execution environment where code actually runs. The brain and the hands are separate processes. The session log that tracks everything the agent has done is also separate, durable, and recoverable: if the harness crashes, a new one can boot with the same session ID and resume from where it left off.
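The crash-and-resume behavior implies an append-only log keyed by session ID. Here is a minimal sketch of that idea, with assumed names throughout (the blog post does not describe an implementation): a first harness appends events and dies; a replacement boots with the same session ID and replays the history.

```python
import json
import os
import tempfile

class SessionLog:
    """Durable append-only event log, one JSONL file per session."""
    def __init__(self, root, session_id):
        self.path = os.path.join(root, f"{session_id}.jsonl")

    def append(self, event):
        # Each event is a single line, flushed on close, so a crash
        # loses at most the event being written.
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")

    def replay(self):
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return [json.loads(line) for line in f]

root = tempfile.mkdtemp()

# First harness does some work, then "crashes".
log = SessionLog(root, "sess-42")
log.append({"step": 1, "action": "clone_repo"})
log.append({"step": 2, "action": "run_tests"})
del log

# A new harness boots with the same session ID and resumes.
resumed = SessionLog(root, "sess-42")
history = resumed.replay()
print(history[-1]["action"])
```

Because the log lives outside the harness process, recovery is just a replay: the new harness picks up at step 2 rather than starting over.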
The performance numbers improved materially after the decoupling. According to the blog post, p50 time-to-first-token — how fast the median agent starts producing output after receiving a task — dropped roughly 60 percent. The p95 figure, covering the slowest 5 percent of tasks, dropped over 90 percent. Decoupling means the brain is no longer blocked waiting for the hands to finish a long-running operation.
Rakuten deployed specialist agents across product, sales, marketing, finance, and HR within a week, according to a case study published alongside the announcement. Sentry paired its Seer debugging agent with a Claude-powered agent that writes patches and opens pull requests, giving developers a single flow from a flagged bug to a reviewable code fix, according to the same source. Notion and Asana have not disclosed specific use cases publicly.
Anthropic's annualized recurring revenue has surpassed $30 billion as of April 2026, roughly three times higher than in December 2025, Wired reported. The company did not break out revenue attributable specifically to Managed Agents.
The question enterprise buyers should be asking is not whether to adopt agentic infrastructure — that decision appears effectively made, given the velocity of deployment at companies like Rakuten — but whether their current agent vendor's credential architecture is actually immune to the OWASP-class attacks that Anthropic explicitly designed around. The coupled design is still the industry default. Changing it requires rebuilding core infrastructure, not just adding a guard layer. That work is non-trivial, and for buyers who have already committed to a platform, it may mean waiting for their vendor to catch up rather than migrating.
What the Anthropic post makes concrete is that the security boundary in agentic AI is an architectural problem, not a filtering problem. A guard layer that inspects prompts before they reach the model is a policy control, not a structural one. The vault pattern removes the attack surface entirely by making credentials unreachable from the execution environment. Whether that tradeoff is available to every agent framework depends on how the underlying runtime is designed.
Managed Agents is available in public beta as of April 8, 2026 through the Claude Console, Claude Code, and a new CLI.