Google spent three years building a custom AI chip supply chain. Then it made that supply chain the product.
Google operates the most diversified custom silicon supply chain in the AI industry, with four design partners (Broadcom, MediaTek, Marvell, and Intel), each assigned a distinct computational role. MediaTek's input-output modules on Google's current-generation Ironwood chip ran 20 to 30 percent cheaper than comparable alternatives, The Next Web reported. That cost advantage points to where the value is accumulating: the coordination layer, meaning the company that designs the system architecture and manages the supply chain behind it.
The chiplet economy is not a future scenario. It is a present one. Chiplets are small, specialized silicon dies, each engineered for a distinct computational task and packaged together to form a complete system — analogous to how the spinning jenny, power loom, and cotton gin were each specialized machines that together produced more cloth than any single artisan's tool could. The spinning wheel fragmented into purpose-built machines; the GPU is now doing the same.
On Ironwood, the fragmentation is literal. Each chip delivers ten times the peak performance of the TPU v5p, carries 192 gigabytes of HBM3E memory with 7.2 terabytes per second of bandwidth, and scales to 9,216 liquid-cooled chips in a single superpod producing 42.5 FP8 exaflops, The Next Web reported. Google's total expected TPU shipments are projected at 4.3 million units in 2026, scaling to more than 35 million by 2028. Anthropic has committed to up to one million of those chips, with access to approximately 3.5 gigawatts of next-generation TPU-based compute starting in 2027, The Next Web reported.
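Those headline figures are internally consistent, as a quick back-of-the-envelope check shows. The sketch below derives the implied per-chip numbers from the superpod totals reported above; the variable names are illustrative, not Google's.

```python
# Back-of-the-envelope check of the Ironwood superpod figures
# reported above: 42.5 FP8 exaflops across 9,216 liquid-cooled chips.
CHIPS_PER_POD = 9_216
POD_FP8_FLOPS = 42.5e18      # 42.5 exaflops at FP8 precision
HBM_PER_CHIP_GB = 192        # HBM3E capacity per chip

# Implied per-chip FP8 throughput, in petaflops.
per_chip_pflops = POD_FP8_FLOPS / CHIPS_PER_POD / 1e15
print(f"per-chip FP8: {per_chip_pflops:.2f} PFLOPS")  # ~4.61 PFLOPS

# Aggregate HBM across one superpod, in petabytes (decimal).
pod_hbm_pb = CHIPS_PER_POD * HBM_PER_CHIP_GB / 1e6
print(f"pod HBM: {pod_hbm_pb:.2f} PB")  # ~1.77 PB
```

The implied figure of roughly 4.6 petaflops per chip at FP8 is what makes the stated tenfold jump over the TPU v5p plausible at the pod level.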
The bottleneck is shifting. Designing a fast chip is no longer enough. The new scarce resource is the ability to coordinate a chiplet supply chain: to take a system-level AI workload, decompose it into specialized dies, manage the inter-chiplet communication, and get the whole assembly fabricated at TSMC and integrated with advanced packaging.
That is where Broadcom is positioned. The company holds more than 70 percent of the custom AI accelerator market, The Next Web reported; Marvell holds roughly 25 percent by most third-party estimates. The Google-Broadcom partnership was extended through 2031 in an expanded deal announced this month, CNBC reported. Meta last week committed to deploying one gigawatt of its own custom MTIA chips using Broadcom technology, CNBC reported.
Broadcom reported $8.4 billion in AI revenue in Q1 2026, up 106 percent year over year, and guided to $10.7 billion in AI chip revenue for Q2, Reuters reported. "We have line of sight to achieve AI revenue from chips in excess of $100 billion in 2027," CEO Hock Tan told investors in March. That figure is post-earnings-call guidance, not a confirmed backlog number in an SEC filing. The custom ASIC market as a whole is expected to reach $118 billion by 2033, according to TrendForce data cited by InvestorPlace.
Custom AI chip sales are projected to grow 45 percent in 2026, while GPU shipments grow at roughly 16 percent, according to TrendForce data cited by InvestorPlace. The divergence tracks the shift in where the bottleneck lives: from chip design to supply chain coordination.
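To see how fast that divergence compounds, consider a simple illustration: hold the two 2026 growth rates constant for a few years. The constant-rate assumption is ours for illustration only, not a TrendForce projection.

```python
# Illustrative only: compound the 2026 growth rates reported above
# (45% for custom AI chips, 16% for GPUs) as if they held steady.
# The constant-rate assumption is hypothetical, not TrendForce's.
CUSTOM_GROWTH = 1.45
GPU_GROWTH = 1.16

for years in range(1, 4):
    relative = (CUSTOM_GROWTH / GPU_GROWTH) ** years
    print(f"after {years} year(s): custom volume up {relative:.2f}x vs GPUs")
```

Under that assumption, custom accelerator volume roughly doubles relative to GPU volume within three years, which is the scale of divergence the growth figures imply.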
For AI infrastructure buyers and investors, the practical implication is concrete: within two to three years, choosing an AI compute stack may look less like selecting a GPU model and more like assembling a custom workstation. The chiplet economy turns hardware procurement into a mix-and-match decision, and whoever supplies the integration expertise to make that mix work will capture more value than whoever supplies any single chip.
Broadcom is positioned to own the coordination layer first. The question is whether the silicon industry notices before the next supply chain coordinator emerges.