The AI Doomsayer Who Found the Boring Solution
Jack Clark co-founded Anthropic.

Jack Clark, Anthropic co-founder and former AI risk warner, has developed a 'factory regulation' governance framework that treats AI oversight as a matter of governing outputs and production facilities rather than abstract technology. Anthropic has reached $19B annualized run-rate revenue, though actual cumulative commercial revenue stands at $5B through December 2025, with the gap illustrating how consumption-based billing inflates forward-looking metrics. The company launched the Anthropic Institute as an internal think tank while navigating a Pentagon dispute over federal contracting.
- Clark's 'factory regulation' theory treats AI governance as analogous to oversight of facilities producing diverse outputs—cars, animals, and weapons—shifting focus from the technology itself to its production and products
- Anthropic's $19B run-rate revenue represents a 28-day consumption snapshot extrapolated annually, while the $5B actual commercial revenue figure reflects realized billing—a distinction that matters for assessing growth sustainability
- Enterprise customers comprise 80% of revenue and are billed on a consumption basis, meaning a single large contract cycle can cause sharp swings in reported run-rate metrics
Jack Clark co-founded Anthropic. He spent years warning the world about catastrophic AI risk. Now he is running a think tank inside the company he helped build, and he has developed a deeply practical theory about what AI governance actually looks like: it is a factory regulation problem.
"AI is fundamentally like everything," Clark told Derek Thompson in a Substack interview published this month. "It is like a factory that produces cars, micro scooters, animals, and nuclear weapons all at the same time." The question society faces, he argues, is not how to govern the technology abstractly but how to govern the outputs — and the facilities that produce them. That is a more tractable framing than most of what comes out of the AI safety conversation, and it arrives at a moment when Anthropic is simultaneously generating some of the fastest revenue growth ever recorded in the technology industry and confronting the possibility that the U.S. government will decide what gets built next.
Anthropic recently surpassed $19 billion in annualized run-rate revenue, up from $9 billion at the end of 2025 and roughly $14 billion a few weeks prior, Bloomberg reported. The growth was driven by broad adoption of its AI models and products, including Claude Code, the company's coding tool. Analysts have called it the fastest-growing business at scale ever recorded. Clark himself would not be the one to tell you to celebrate.
Here is the part the headline numbers obscure. In a court filing connected to the company's ongoing dispute with the Pentagon, Anthropic Chief Financial Officer Krishna Rao said the company had generated more than $5 billion in all-time commercial revenue through December 2025. Reuters Breakingviews explained why the gap between that figure and the $19 billion run-rate is not a contradiction but is also not a clean endorsement of either number. Run-rate revenue, as Clark's own CFO effectively acknowledged in the filing, is a snapshot of the last 28 days of consumption-based sales multiplied by 13, plus monthly subscriptions multiplied by 12. It is designed to extrapolate, not to count what has actually been billed. Big businesses account for 80 percent of Anthropic's revenue and tend to be billed on a consumption basis, which means the headline number can swing sharply with a single enterprise contract cycle. Rao's $5 billion is the actual tally. The $19 billion is the rate at which things are happening — a real signal, but one that lives in a different universe from a GAAP revenue figure.
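The extrapolation the filing describes is simple enough to sketch. The function below implements the stated formula (28 days of consumption billing times 13, which covers roughly 364 days, plus monthly subscriptions times 12); the input figures are illustrative placeholders, not Anthropic's actual numbers, which the filing does not break out.

```python
def annualized_run_rate(consumption_28d: float, subscriptions_monthly: float) -> float:
    """Annualize a revenue snapshot per the method described in the filing:
    trailing-28-day consumption billing x 13, plus monthly subscriptions x 12.
    """
    return consumption_28d * 13 + subscriptions_monthly * 12

# Hypothetical inputs (in billions): $1.3B consumed in the trailing 28 days,
# $0.2B per month in subscriptions.
rate = annualized_run_rate(1.3, 0.2)
print(f"${rate:.1f}B annualized")  # a forward extrapolation, not billed revenue
```

The point of the sketch is the lever it exposes: because the consumption term is multiplied by 13, a single large enterprise drawing down credits in one 28-day window moves the headline number far more than it moves cumulative billed revenue.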
That distinction matters because the same week Bloomberg reported the run-rate acceleration, Anthropic launched the Anthropic Institute, an internal think tank focused on the longer arc of the company's social impact. Clark moved from his previous role to head the new organization. The founding members include Matt Botvinick, formerly of Google DeepMind; Anton Korinek, a professor on leave from the University of Virginia; and Zoe Hitzig, a researcher who left OpenAI. Clark has said he expects the think tank's staff to double every year for the foreseeable future. The company's public policy team tripled in size in 2025. These are not the numbers of a company that thinks the governance question is somebody else's problem.
Clark's factory analogy is the sharpest policy framing to come out of a major AI lab in some time, partly because it sidesteps the abstraction that usually paralyzes these conversations. Rather than debating whether AI is broadly good or dangerous, it asks what you do with a facility that makes things with widely varying social harm profiles. The answer, historically, has involved licensing, inspection, output liability, and worker safety rules — not a single global moratorium. "The main question we are going to have to deal with as a society is how do you govern those factories and how do you decide what the appropriate uses are of the things that come out," Clark told Thompson.
Whether that analogy survives contact with the actual Pentagon dispute is a separate question. The Trump administration designated Anthropic as a supply chain risk in February, effectively blocking government contractors from using its products, and the company filed two federal lawsuits challenging the designation in March. Depending on how courts interpret the relevant statute, Anthropic has said that hundreds of millions of 2026 revenue is at risk at minimum, and in the most severe case, multiple billions — a number that is non-trivial relative to the $5 billion in actual cumulative revenue the company has generated to date. Reuters separately documented that at least one customer paused discussions on a $15 million contract and two financial-services companies declined to finalize agreements worth a combined $80 million after the designation. The factory analogy works fine in the abstract. In practice, the factory is getting inspected by lawyers.
Clark's AGI prediction is also worth examining on its own terms. He told The Verge that he believes powerful AI will arrive by the end of this year or early 2027. That is a firm claim from someone who has spent years arguing that transformative AI was both possible and unpredictable. It is also notable for what it implies about Anthropic's own planning timeline — a company does not staff a think tank to double annually unless it expects to be operating in a meaningfully different environment for a long time.
One of the more revealing moments in the Thompson interview is Clark's explicit disagreement with Dario Amodei, Anthropic's chief executive, on the question of AI and unemployment. Amodei has spoken about displacement as a likely, near-term consequence of AI advancement. Clark's response was sharper and more political: "We are talking about one of the potential things that can happen, and I think it is worth noting that this is a choice. I do not agree with this, because I think it is a choice that we can make." That is an intra-company dispute on a question that most public discourse treats as settled — framed not as a technical forecast but as a policy and political one. It is the kind of thing a co-founder says when they are trying to move a conversation rather than protect a narrative.
What the Anthropic story is not, despite the revenue numbers, is a clean validation of the "AI is making money" thesis that gets recycled every quarter. The run-rate figures are real but volatile. The actual revenue is smaller and slower-building. The Pentagon situation is a genuine revenue risk that has already caused enterprise customers to pause deals. The governance framework is intellectually coherent but untested against the legal machinery of a government that has decided the factory needs to be on a blacklist. Clark himself seems to understand all of this. The factory analogy is not optimism. It is a description of what he thinks the problem actually is.
The Anthropic Institute will publish research on these questions. Clark will continue speaking. The $19 billion number will continue to get quoted without the $5 billion asterisk. The more interesting question is what happens when the factory analogy runs into a court ruling, a congressional hearing, or a mid-sized defense contractor that decided $80 million in contracts was enough reason to stop asking questions.
Sources
- shanakaanslemperera.substack.com — Shanaka Anslem Perera Substack
- theguardian.com — The Guardian
- reuters.com — Reuters
- understandingai.org — It still doesn't look like there's an AI bubble - Understanding AI
- derekthompson.org — What Is Anthropic Thinking? - Derek Thompson (Substack/The Atlantic)
- bloomberg.com — Anthropic Nears $20 Billion Revenue Run Rate Amid Pentagon Feud - Bloomberg