OpenAI is offering private equity firms a guaranteed 17.5 percent return. Read that again. A company reportedly raising roughly $4 billion through a joint venture valued at about $10 billion pre-money, and offering preferred returns that would make a leveraged buyout fund blush, is not a company with pricing power. It is a company that needs the money badly enough to promise outcomes that venture investors stopped accepting years ago.
Meanwhile, Anthropic grew its revenue run-rate from $1 billion to $14 billion in roughly one year. That number, reported across multiple All-In Podcast segments in March, is not a benchmark victory. It is a unit economics story the rest of the industry is sleepwalking past. (Bloomberg later reported the figure climbing toward $19 billion by late February, which suggests the $14 billion run-rate was already a stale snapshot by the time it aired, not a ceiling.)
The public debate about AI moats has fixated on benchmark scores and capability rankings. The real moat is compute cost structure. And right now, Anthropic has one and OpenAI does not.
What Jensen Actually Said
On the same All-In episode that produced the moats segment Sacks and Chamath later riffed on, Chamath put the question directly to Jensen Huang: what is the moat when every enterprise software company is just reselling Anthropic and OpenAI tokens? The answer Chamath himself offered, that entrepreneurs should know their vertical deeply and then wait for the tools to catch up so they can imbue them with that knowledge, drew repeated agreement from Jensen. The Singjupost editor's note paraphrased this as "specialized knowledge will become the ultimate moat," but that phrasing belongs to the editor's synthesis, not to a direct transcript quote from Jensen. The substance of what Chamath argued, and Jensen endorsed by assent, was this: the foundation model layer is commoditizing faster than anyone expected. What matters is who can build on top of it with proprietary context, workflow depth, and domain expertise that cannot be replicated by prompt engineering alone.
This is consistent with what a16z has argued: AI strengthens moats like network effects, brand, and proprietary data. And the San Francisco Fed's VC research echoes it — sustainable competitive advantages in AI include proprietary data, domain-specific technology, and established distribution channels. Not GPT-5 versus Claude 4. Who owns the workflow.
But if that is the theoretical framework, the empirical signal is what is happening to compute costs.
The Trainium Effect
Anthropic announced in March that it plans to expand its use of Google Cloud TPUs to up to one million units, bringing over one gigawatt of capacity online in 2026. More importantly, the company disclosed a compute strategy that runs across three chip platforms: Google TPUs, Amazon Trainium2, and Nvidia GPUs. That is not hedging. That is building optionality into your cost structure at a level that no single-chip-supplier competitor can match.
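To see why a three-platform strategy is a cost-structure play rather than mere supplier hedging, consider a minimal sketch of a cost-aware scheduler across heterogeneous accelerator pools. Everything here is hypothetical: the hourly rates, the throughput figures, and the `AcceleratorPool` abstraction itself are illustrative placeholders, not Anthropic's actual numbers or architecture.

```python
from dataclasses import dataclass

@dataclass
class AcceleratorPool:
    name: str
    hourly_rate_usd: float    # hypothetical on-demand $ per chip-hour
    tokens_per_second: float  # hypothetical sustained inference throughput per chip

    def cost_per_million_tokens(self) -> float:
        tokens_per_hour = self.tokens_per_second * 3600
        return self.hourly_rate_usd / tokens_per_hour * 1_000_000

# Illustrative numbers only, chosen to show the mechanics, not to report real pricing.
pools = [
    AcceleratorPool("nvidia-gpu", hourly_rate_usd=4.00, tokens_per_second=450),
    AcceleratorPool("trainium2",  hourly_rate_usd=2.00, tokens_per_second=420),
    AcceleratorPool("tpu",        hourly_rate_usd=2.75, tokens_per_second=480),
]

# A single-supplier shop is locked into one row of this table; a multi-platform
# shop routes each workload to whichever pool is cheapest right now.
for p in sorted(pools, key=lambda p: p.cost_per_million_tokens()):
    print(f"{p.name:12s} ${p.cost_per_million_tokens():6.2f} per 1M tokens")
cheapest = min(pools, key=lambda p: p.cost_per_million_tokens())
print(f"route to: {cheapest.name}")
```

The point of the sketch is the `min()` call: with three suppliers, the marginal cost of a token is the cheapest of three curves rather than whatever one vendor charges this quarter.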
Amazon's Trainium2 delivers meaningfully better price-performance than comparable Nvidia H100 instances; AWS's own documentation cites a 30 to 40 percent advantage. For sustained workloads specifically, DataGravity's independent analysis found Trainium2 running at roughly half the price of comparable H100 instances. According to the same analysis, Anthropic has reported 50 percent cost reductions and material throughput gains versus GPU configurations on specific training runs. If those numbers hold at scale, the implications are significant.
A company that can serve the same quality of inference at substantially lower cost has a structural pricing advantage. It can undercut competitors, grow faster, or take higher margins, or do some combination of the three. The company that pays twice as much per token for equivalent capability is in a weaker position regardless of how the benchmarks shake out.
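A back-of-envelope version of that logic, with made-up numbers (neither figure is a real Anthropic or OpenAI price or cost):

```python
# Hypothetical illustration of how serving cost flows into gross margin.
api_price = 3.00   # $ per 1M tokens, what both providers charge customers

cost_low = 0.80    # provider with the cheaper compute stack, $ per 1M tokens
cost_high = 1.60   # provider paying roughly 2x per token, $ per 1M tokens

margin_low = (api_price - cost_low) / api_price
margin_high = (api_price - cost_high) / api_price
print(f"gross margin, low-cost provider:  {margin_low:.0%}")   # ~73%
print(f"gross margin, high-cost provider: {margin_high:.0%}")  # ~47%

# The aggressive-pricing case: the price at which the low-cost provider
# still earns its rival's margin.
price_floor = cost_low / (1 - margin_high)
print(f"low-cost provider matches rival's margin at: ${price_floor:.2f}")  # $1.50
```

At these hypothetical numbers, the low-cost provider could cut its price in half and still earn the margin its rival earns at full price. That is what "undercut, grow faster, or take higher margins" means in practice.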
This is not theoretical. Anthropic now serves more than 300,000 business customers, according to its own announcement. Large accounts — those generating more than $100,000 in annual revenue — grew nearly 7x in the past year. The revenue trajectory from $1 billion to $14 billion in approximately 12 months tracks with what you would expect from a company that has a cost structure advantage and is using it to win enterprise deals.
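The implied compounding is worth a quick sanity check. Treating both figures as annualized run-rates, which the reporting suggests but does not confirm:

```python
# Implied month-over-month growth if the run-rate went from $1B to $14B in 12 months.
start, end, months = 1.0, 14.0, 12
monthly_growth = (end / start) ** (1 / months) - 1
print(f"implied MoM growth: {monthly_growth:.1%}")  # ~24.6% per month
```

Roughly 25 percent month over month, sustained for a year. Growth at that rate in the enterprise segment is hard to explain without a pricing lever.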
OpenAI, by contrast, is in advanced talks to raise approximately $4 billion through a joint venture with TPG, Bain Capital, Advent International, and Brookfield Asset Management at a pre-money valuation of roughly $10 billion. The 17.5 percent guaranteed return being offered to those PE firms is not a sign of strength. It is a sign that conventional growth equity investors were not willing to back OpenAI on the kind of terms Anthropic could command.
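It helps to compound that hurdle forward. A minimal sketch, assuming annual compounding on the full $4 billion; the reporting specifies neither the compounding convention nor the holding period:

```python
# What a $4B investment with a 17.5% guaranteed (preferred) return must grow to
# before common holders see a dollar of upside. Compounding convention and
# holding period are assumptions for illustration, not reported deal terms.
principal = 4.0   # $B invested
hurdle = 0.175    # guaranteed annual return
for years in (3, 5, 7):
    owed = principal * (1 + hurdle) ** years
    print(f"after {years} years the vehicle owes the PE firms ${owed:.1f}B")
# after 3 years: ~$6.5B; after 5: ~$9.0B; after 7: ~$12.4B
```

Against a vehicle valued at roughly $10 billion pre-money, a hurdle that compounds past $12 billion within seven years tells you who had leverage in that negotiation.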
What This Means for Builders
The BCG AI unit's Matt Kropp put the enterprise dynamic plainly: there is a race to lock in as many enterprises as possible, because once a company has a customized AI model integrated into its systems, switching to a competitor becomes much harder. The companies winning that race are not necessarily the ones with the best benchmarks. They are the ones with the best pricing, the deepest workflow integration, and the most compelling enterprise sales motion.
Chamath Palihapitiya, who runs the 8090 fund, described his AI infrastructure costs as "ginormous": more than tripling since November 2025 and trending toward $10 million per year. He has already migrated his codebase from Cursor to Claude Code, citing cost constraints. The bottleneck is not access to capable models. It is the cost of running them at scale.
Anthropic appears to have solved that problem before many of its competitors recognized it existed. The diversified chip strategy, running TPUs, Trainium2, and Nvidia in parallel, gives Anthropic pricing flexibility that a single-supplier approach cannot match. If Trainium2 delivers the cost advantage the AWS and DataGravity figures suggest, and Anthropic is the only major lab running it at serious scale alongside TPUs, that is a structural advantage that does not show up in MMLU or Chatbot Arena.
The IPO Question
Anthropic is reportedly targeting a public listing as early as October 2026, which could involve raising more than $60 billion. The company has pledged $50 billion to build custom data centers in the United States. Amazon has invested $8 billion in Anthropic since 2023, according to CNBC.
What Anthropic needs to show public market investors is a cost structure that supports durable margins. The diversified chip strategy is the mechanism. If the Trainium2 economics hold up at that scale, Anthropic's path to profitability is more credible than that of competitors paying H100 prices for every inference call.
The real question for founders and investors is not which model ranks first on the next benchmark. It is which company has a cost structure that lets it price aggressively, grow share, and invest in the application layer without bleeding money on every API call. The answer right now points in one direction.
The moat is specialized knowledge at the application layer. But to build on that layer at scale, you first need a foundation that does not bankrupt you. That is the moat nobody is talking about.