On April 4, Anthropic cut off a category of users it had been quietly losing money on.
The company had been selling its $200-per-month Claude Max plan to developers running automated software — tools that loop through code generation, web searches, and document processing, sometimes consuming millions of tokens per session — under a plan billed as unlimited. According to Anthropic's own published rates, a single heavy user of those automated tools was generating roughly $5,000 in monthly compute costs against a $200 payment. The company was eating a $4,800 monthly subsidy per user, and on April 4 it stopped.
The move came the same week that Anthropic CEO Dario Amodei published an essay arguing that AI demand across the industry is inflated and that competitors have not done the basic arithmetic on their infrastructure commitments. His quote from a February interview circulated widely: some companies, he said, have not written down the spreadsheet. They are just doing stuff because it sounds cool. The April 4 cutoff was the move that made the essay credible.
The pricing arithmetic is straightforward. When a $200 subscription can generate $5,000 in compute costs, the flat-rate model has broken. The cutoff was not a warning about the industry. It was a disclosure about Anthropic's own books — an admission that the subsidy was real, the users were real, and the margins were not.
The broader industry is arriving at the same conclusion by different paths. OpenAI's Nick Turley, who leads ChatGPT, said recently that unlimited plans may not survive: having an unlimited AI plan, he noted, is like having an unlimited electricity plan. It just does not make sense. Salesforce is rolling out a metric called agentic work units that tracks completed tasks rather than tokens burned — an implicit admission that token volume is a distorted measure of value. Ramp, which tracks corporate spending for thousands of companies, reports that AI token costs across its customer base have grown 13-fold over the past year. No finance team knows how to budget for that.
The enterprise ROI question is where this gets uncomfortable. Jen Stave, who leads the Digital, Data and Design Institute at Harvard Business School, has spoken with a dozen CTOs and CIOs about their AI deployments. The consistent finding: nobody has a reliable framework for measuring what they are getting for the money. Companies are buying AI because competitors are buying AI, tracking adoption metrics like tokens consumed rather than outputs delivered, and hoping the productivity numbers materialize before the next board meeting. Databricks CEO Ali Ghodsi put it more bluntly: if your goal is to just burn a lot of money, there are easy ways to do that. Resubmit the query to ten places. Put up a loop that just does it again and again. The pricing tiers that make this possible — $5 per million input tokens, $25 per million output tokens, according to Anthropic's published rate card — are now the cost of doing business whether companies track them or not.
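The rate card makes the subsidy easy to reconstruct. A minimal sketch: the per-token rates below are the published figures cited above, but the monthly token volumes — and the split between input and output — are illustrative assumptions chosen to show how a heavy automated user reaches roughly $5,000 in compute cost against a $200 flat fee.

```python
# Published rates cited in the article; token volumes below are assumptions.
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token
SUBSCRIPTION = 200.00            # flat monthly subscription price

# Hypothetical heavy agent workload: looping code generation and document
# processing reads far more context (input) than it emits as output.
input_tokens = 500_000_000   # assumed monthly input volume
output_tokens = 100_000_000  # assumed monthly output volume

compute_cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
subsidy = compute_cost - SUBSCRIPTION

print(f"compute cost: ${compute_cost:,.0f}")   # compute cost: $5,000
print(f"monthly subsidy: ${subsidy:,.0f}")     # monthly subsidy: $4,800
```

Different volume assumptions shift the totals, but the shape of the problem does not change: costs scale linearly with tokens while revenue stays flat.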
This is the backdrop against which Amodei described what he calls the cone of uncertainty in AI infrastructure: data centers take one to two years to build, so companies are committing billions now for demand they cannot verify. Being wrong by a couple of years, he said, can be ruinous. The April 4 cutoff suggests he believes his own warning — and that he believes it before his competitors do.
That critique is legitimate. It is also useful to notice who is making it. Anthropic is the company that just moved to per-token billing, cut off its heaviest subsidized users, and positioned itself as the realist in a room full of hype. Whether that positioning reflects genuine caution or strategic differentiation ahead of an IPO is a question investors will have to answer for themselves.