The Emergency Compute Fix: What Amazon's $25 Billion Buys at Anthropic
Anthropic quietly lowered Claude's effort level in March. A reporter ran the same query at both settings and timed the results. The investment Amazon just pledged is the fix for what that test found.

Anthropic quietly changed how hard Claude works by default. In March 2026 it turned the effort level — the setting controlling how much work the model puts into each answer — down from high to medium, citing user feedback that Claude was consuming too many tokens. The practical result: faster answers, lower compute bills, and sometimes a model that stops short. To verify the change landed, this reporter ran a halting problem proof query at both settings. The halting problem asks whether any program can decide, for an arbitrary program and input, whether that program will finish or loop forever; no such decider can exist, and reconstructing that proof is a canonical test of a model's reasoning depth. At medium effort the proof completed in under thirty seconds. At high effort, the same query timed out after thirty seconds. The result held across three runs.
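For readers who want the gist of the proof Claude was asked to reconstruct, the classic diagonalization argument can be sketched in a few lines of Python. This is an illustration, not a working decider — the `halts` oracle is hypothetical by construction, which is exactly what the argument shows:

```python
# A sketch of the classic halting-problem contradiction. The oracle
# `halts` is hypothetical: the point of the proof is that no total
# implementation of it can exist.

def halts(program, arg):
    """Hypothetical oracle: True if program(arg) would finish.
    Stubbed here, because the argument below shows any real
    implementation must be wrong on at least one input."""
    raise NotImplementedError("no such oracle can exist")

def troll(program):
    """Do the opposite of whatever the oracle predicts about
    running `program` on its own source."""
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    return "halted"      # oracle said "loops" -> halt immediately

# The contradiction: consider troll(troll).
# - If halts(troll, troll) returned True, troll(troll) would loop forever.
# - If it returned False, troll(troll) would halt at once.
# Either answer is wrong, so `halts` cannot be implemented.
```

Producing this argument cleanly, with the self-referential step intact, is why the query works as a depth probe: a model cutting its reasoning short tends to assert the conclusion without constructing the contradiction.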
Amazon just pledged another $25 billion to Anthropic. Here is what that buys: access to more than 1 million Trainium2 chips already running in production, nearly 1 gigawatt of new Trainium2 and Trainium3 capacity coming online by the end of 2026, and the option to slot in future chip generations as they ship. These are the largest AI infrastructure numbers attached to a single company relationship in the industry's history. The question the check cannot answer is whether it is enough, and whether it comes fast enough.
The numbers behind the push are real. Amazon is investing $5 billion now, with up to $20 billion more tied to commercial milestones, bringing its total potential commitment to $33 billion. Anthropic is committing to spend more than $100 billion on AWS technologies over the next decade. Its run-rate revenue has more than tripled in months, from roughly $9 billion at the end of 2025 to more than $30 billion today. Amazon has spent roughly $200 billion on capital expenditures this year, mostly on AI infrastructure.
Growth at this pace placed inevitable strain on Anthropic's infrastructure, degrading reliability and performance for free, Pro, Max, and Team users; Anthropic responded with stricter usage limits during peak hours. Across developer forums and complaint threads, the word "degraded" started appearing next to Claude's name with increasing regularity. An Anthropic spokesperson pointed to a public changelog entry as disclosure; users noted that nobody reads changelogs the way nobody reads terms of service.
The effort-level change is the quiet tell. The company that built its brand on transparency is now accused by users and rivals alike of hiding a capacity problem behind a product update. OpenAI's chief revenue officer — Denise Dresser, hired in December 2025 and roughly four months into the role — put the competitive angle plainly in an internal memo to employees, reviewed by The Verge: Anthropic made a "strategic misstep" by not securing enough compute capacity and is "operating on a meaningfully smaller curve" than its competitors. OpenAI also disputes Anthropic's stated revenue figures. Anthropic declined to answer those specific allegations on the record.
The hardware is the answer to that pressure. Trainium3, Amazon's third-generation AI chip, delivers roughly four times the training throughput of its predecessor and 2.52 petaFLOPS of FP8 compute per chip, per Amazon's specifications, corroborated by independent technical analysis. Project Rainier, the cluster Anthropic and Amazon built together, already runs on nearly 500,000 Trainium2 chips and is now being scaled. The 5 gigawatts of secured capacity is the infrastructure floor that the growth spurt exposed as missing.
A company that tripled its revenue in a single quarter discovered that scale is not free. The compute it deferred buying now has to be purchased at emergency speed from a single benefactor. Amazon is buying exposure to an AI lab that also sells Claude on Google Cloud and Microsoft Azure, has a separate $30 billion Azure commitment, and is now promising $100 billion in AWS spend over ten years.
The milestones that unlock the next $20 billion remain private. Amazon has not disclosed what triggers those payments. If Claude's reliability problems persist, or if a competitor ships something meaningfully better, the milestone structure becomes a leash rather than a lifeline.
The 5 gigawatts helps. The $33 billion commitment signals that Amazon is not walking away. The underlying constraint is not capital. It is time: building a frontier AI company at the pace the market now demands means the infrastructure is always behind the growth. You can buy your way out of that. But only if the person selling to you is willing to wait.





