The companies most publicly associated with the frontier of AI training own almost none of the hardware that does the training. According to Epoch AI, five hyperscalers (Amazon, Google, Meta, Microsoft, and Oracle) collectively control an estimated 67% of the world's cumulative AI compute as of Q4 2025, up from 60% in early 2024. OpenAI and Anthropic depend almost entirely on these hyperscalers for their compute: OpenAI's training runs are tied primarily to Microsoft Azure, while Anthropic's primary partnership runs through Amazon, backed by an $8 billion investment. That is the infrastructure story the AI industry has been building on top of without naming it directly.
OpenAI and Anthropic are the most visible examples of this gap between prominence and ownership. Neither company owns the chips its most advanced models run on; both rent them, at scale, from the companies that do. This is not a secret. It is the industry's acknowledged operating model. What Epoch's data makes visible is the structural dependency underneath it: the labs most loudly competing for AI supremacy are tenants, not landlords.
The spending figures reinforce the chokepoint. Hyperscalers have pledged close to $700 billion in capital expenditure for 2026, much of it funding the infrastructure that OpenAI, Anthropic, and other model builders depend on. Azure's AI infrastructure backlog stands at roughly $80 billion, with fulfillment constrained by power availability rather than demand, a physical limit that suggests the bottleneck is not going away soon. When Sam Altman publicly fretted about running out of GPUs, he was not angling for better pricing. He was describing a hard physical constraint on what his company can actually build.
Oracle is the outlier worth watching. It holds the fifth position in Epoch's concentration ranking, smaller than the other four but growing. Oracle's deals with OpenAI and Google for dedicated AI compute capacity represent an attempt to position itself as an alternative hyperscaler for AI workloads specifically, rather than as a general-purpose cloud competitor. Whether Oracle can convert those partnerships into durable infrastructure leverage is perhaps the biggest open question in this landscape.
The concentration matters beyond the compute layer. When the hardware lives in a small number of hyperscaler data centers, power availability at those specific locations becomes a strategic variable. Cooling requirements, grid capacity, and interconnection bandwidth are all locally bounded: a frontier training run needs its accelerators in one place, and a power constraint at that site cannot be routed around by spinning up capacity in another region. The hyperscalers are not just selling compute; they are selling access to specific geographic nodes with sufficient power and connectivity for frontier training runs. That is a harder constraint to break than raw chip supply.
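To make the site-level power constraint concrete, here is a back-of-envelope sketch in Python. The cluster size and overhead factor are illustrative assumptions rather than figures from Epoch or any hyperscaler; the only externally anchored number is the roughly 700 W rating of a current-generation datacenter GPU.

```python
# Back-of-envelope sketch of why power is a site-level constraint.
# All inputs are illustrative assumptions, not figures from the article.

ACCELERATORS = 100_000        # assumed size of a frontier training cluster
WATTS_PER_ACCELERATOR = 700   # roughly the rated TDP of a current datacenter GPU
OVERHEAD_FACTOR = 1.3         # assumed overhead for cooling, networking, power delivery

it_load_mw = ACCELERATORS * WATTS_PER_ACCELERATOR / 1_000_000
facility_mw = it_load_mw * OVERHEAD_FACTOR

print(f"IT load: ~{it_load_mw:.0f} MW")         # ~70 MW for the chips alone
print(f"Facility draw: ~{facility_mw:.0f} MW")  # ~91 MW once overhead is included
```

Under these assumptions a single training cluster draws on the order of a hundred megawatts at one grid interconnection point, which is why the capacity cannot simply be relocated to wherever power happens to be available.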
There are counterarguments, and they are not trivial. Model efficiency improvements (smaller models that match or exceed the performance of larger predecessors) reduce the compute-per-capability ratio over time. Open-weight models released by Meta and others let inference workloads run outside hyperscaler data centers, eroding the structural advantage of owned compute for that slice of demand. And the history of technology infrastructure contains multiple examples of concentration reversing: telecom, web hosting, and cloud itself all consolidated before eventually fragmenting in ways that surprised analysts. The trend line Epoch documents is real. Whether it is permanent is not answerable yet.
What is answerable is the current arrangement: the AI companies most often described as competing for dominance are building on infrastructure they do not own, funded partly by capital they raised rather than earned, dependent on power grids they cannot expand unilaterally. The $700 billion in planned hyperscaler capex for 2026 is not a sign of AI abundance; it is a sign of concentration, and of the dependency that comes with it.