Anthropic has signed a deal for 3.5 gigawatts of Google's custom AI chips, a power commitment large enough to run a small city. The detail that should catch the attention of anyone tracking the frontier AI race: Broadcom, the chip company, sits at the center of both this deal and a separate one with OpenAI for 10 gigawatts more.
The SEC filing tells the structural story. Hock E. Tan, Broadcom's chief executive, signed an 8-K on April 6, 2026, disclosing a long-term supply assurance agreement with Google that runs through 2031. Anthropic gets access to approximately 3.5 gigawatts of next-generation TPU capacity starting in 2027, routed through Broadcom. Consumption of that compute is contingent on Anthropic's continued commercial success, a hedge built into the contract language.
That same Hock Tan signed a separate 10-gigawatt co-development agreement with OpenAI last October. Combined, that is 13.5 gigawatts of committed AI compute flowing through one company that is not Nvidia.
Anthropic's own announcement this week explains the revenue scale behind the bet. The company's run-rate revenue has surpassed $30 billion, up from approximately $9 billion at the end of 2025, more than tripling in a matter of months. More than 1,000 business customers now spend more than $1 million annually with Anthropic, doubling from 500-plus in less than two months. The ARR trajectory, as tracked by Jefferies, shows Anthropic accelerating from $14 billion in February to $30 billion this week. OpenAI's ARR dropped from $25 billion to $24 billion over the same period.
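The growth multiples implied by those reported figures can be checked with quick back-of-envelope arithmetic; the values below are the article's cited numbers, not independent data.

```python
# Back-of-envelope check of the reported growth figures.
# All values in billions of USD, taken from the figures cited above.
arr_end_2025 = 9    # run-rate revenue at the end of 2025
arr_february = 14   # Jefferies-tracked ARR in February
arr_now = 30        # run-rate revenue at the time of writing

# Multiple since end of 2025: roughly 3.3x, i.e. "more than tripling"
print(round(arr_now / arr_end_2025, 1))   # 3.3

# Multiple since February alone: more than doubling
print(round(arr_now / arr_february, 1))   # 2.1
```

The February-to-now pace is the striking part: the jump from $14 billion to $30 billion in a few months accounts for most of the acceleration.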
Anthropic CFO Krishna Rao framed the deal as a major expansion of the company's November 2025 commitment to invest $50 billion in U.S. computing infrastructure. The vast majority of the new compute will be sited in the United States.
Analysts at Mizuho, led by Vijay Rakesh, estimated that Broadcom would record $21 billion in AI revenue from Anthropic in 2026 and $42 billion in 2027, figures that underscore the financial scale of what a TPU supply agreement means for Broadcom's top line. The underlying unit economics are where the real signal lives. Custom TPU infrastructure, according to The Next Platform's analysis, may cost on the order of $30 billion to $35 billion per gigawatt. Nvidia's AI systems and the datacenters built around them run approximately $50 billion per gigawatt. At utility scale, that gap compounds into billions of dollars.
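How quickly that per-gigawatt gap compounds is easy to sketch. The calculation below uses The Next Platform's cost estimates cited above; the resulting totals are illustrative, not disclosed contract values.

```python
# Rough cost-gap estimate between custom TPU infrastructure and
# Nvidia-based buildouts, using the per-gigawatt figures cited above.
# All costs in billions of USD per gigawatt.
TPU_COST_LOW, TPU_COST_HIGH = 30, 35  # custom TPU infrastructure range
NVIDIA_COST = 50                      # Nvidia systems plus datacenter

def cost_gap(gigawatts: float, custom_cost: float) -> float:
    """Savings vs. the Nvidia baseline at a given power footprint."""
    return gigawatts * (NVIDIA_COST - custom_cost)

# Anthropic's 3.5 GW commitment: a gap of $52.5B to $70B
print(cost_gap(3.5, TPU_COST_HIGH), cost_gap(3.5, TPU_COST_LOW))

# The combined 13.5 GW flowing through Broadcom: $202.5B to $270B
print(cost_gap(13.5, TPU_COST_HIGH), cost_gap(13.5, TPU_COST_LOW))
```

At Anthropic's scale alone, the implied difference is tens of billions of dollars; across the combined 13.5 gigawatts it reaches into the hundreds of billions, which is the arithmetic behind the bet on custom silicon.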
What both Anthropic and OpenAI are betting, placing that bet through the same intermediary, is that the cost premium for custom silicon over general-purpose GPUs will be worth it for the performance and integration advantages at the frontier. Neither company is building for the mainstream market. Both are building for the top of the capability distribution, where the economics of general-purpose compute stop making sense.
The partnership framing (Anthropic partners with Google, OpenAI partners with Broadcom) obscures the more interesting structural fact. Broadcom builds the custom silicon that Google designs. Google provides the TPU architecture and manufacturing scale. Anthropic and OpenAI pay for guaranteed access to both. The deal is not just a supply agreement. It is a reservation of capacity at a moment when the gap between AI compute demand and available supply is still wide enough to make that reservation worth billions.
Anthropic closed its Series G round in February 2026 at a $380 billion valuation, giving it the capital base to commit to infrastructure spending at this scale.
What to watch: whether the contingent terms (the "continued commercial success" language in the 8-K) create tension as Anthropic's costs scale faster than its revenue, and whether other frontier labs follow the same infrastructure path or break from the pattern by building their own silicon.