The Company That Fired Its Customers First
Block cut 4,000 jobs in February 2026. CEO Jack Dorsey said AI had made those roles unnecessary, and that within a year most companies would reach the same conclusion. He said it the way someone describes gravity: not as a choice but as a condition of the world. What he did not say, and what a new paper from researchers at the University of Pennsylvania and Boston University argues, is that Block was not making a bet on AI. It was making the only rational move in a game every company has to play, regardless of what they know about where it ends.
The paper, "The AI Layoff Trap" by Brett Hemenway Falk and Gerry Tsoukalas (arXiv:2603.20617, March 2026), builds a task-based economic model in which firms compete by automating work. It is a preprint, not yet peer-reviewed; the argument here rests on a model still under academic evaluation. The mechanism is a demand externality: when a company replaces workers with AI, it captures the cost saving privately, but the lost purchasing power falls on every company that sells to those workers. A firm that lays off its customer-support team does not just cut costs. It removes a customer from the broader economy, and every competitor absorbs a share of that loss. Each company doing the rational thing produces an irrational collective outcome.
The authors call the result a Prisoner's Dilemma. Every firm benefits from automating. No individual firm can afford not to. But if all firms automate simultaneously, they destroy the consumer demand that their own revenues depend on. The gains are private. The losses are shared. This is not a metaphor. It is the structure of the game, and the authors prove it mathematically using a competitive task-based framework.
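The dilemma structure can be sketched as a toy two-firm payoff matrix. The numbers below are invented purely for illustration; they are not taken from the paper's model:

```python
# Toy two-firm game illustrating the Prisoner's Dilemma structure.
# All payoffs are invented for illustration, not drawn from the paper.
# Each entry maps (firm A action, firm B action) -> (A profit, B profit).
payoffs = {
    ("hold", "hold"):         (10, 10),  # consumer demand fully preserved
    ("hold", "automate"):     (4, 12),   # rival keeps the cost savings
    ("automate", "hold"):     (12, 4),
    ("automate", "automate"): (6, 6),    # demand destroyed, gains competed away
}

def best_response(rival_action):
    """Firm A's best reply to a fixed action by firm B."""
    return max(("hold", "automate"),
               key=lambda a: payoffs[(a, rival_action)][0])

# Automating is a dominant strategy: it is the best reply either way...
assert best_response("hold") == "automate"
assert best_response("automate") == "automate"
# ...yet mutual automation leaves both firms worse off than mutual restraint.
assert payoffs[("automate", "automate")][0] < payoffs[("hold", "hold")][0]
```

The asymmetry in the off-diagonal cells is what makes restraint unstable: holding back while a rival automates is the worst outcome of all.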
Block is the real-world case study. Four thousand people lost their jobs. Dorsey named AI as the cause and predicted the rest of corporate America would follow. The paper uses this not as proof of the mechanism but as a concrete anchor for what the competitive logic already implies: given the incentive structure, Block's decision was not a choice. It was the only equilibrium.
The Red Queen Effect, or Why Better AI Makes This Worse
The most counterintuitive result in the paper is what the authors call the Red Queen Effect. Intuitively, you might expect that smarter AI would resolve the problem: if the technology is more capable, the displaced workers can be retrained, the productivity gains are larger, the transition is smoother. The model produces the opposite conclusion.
When AI improves, so does the incentive to automate before your competitors do. Each firm sees a larger market-share advantage from moving first. At the symmetric equilibrium, where every firm is equally automated, those advantages cancel out. What remains is the additional demand destruction from higher automation rates. Better AI does not mitigate the externality. It amplifies it.
The reason is structural. In a competitive market, no single firm can capture the demand gains from automation — those gains are competed away through lower prices. But every firm fully internalizes its own cost savings. The asymmetry widens as the technology improves: the private benefit of automating grows while the social benefit of doing so shrinks. More automation, at higher capability levels, produces more destroyed demand and no net gain in market position for anyone.
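The widening asymmetry can be shown with a toy numeric sketch. The functional forms and constants below are invented for illustration and are not the paper's:

```python
# Toy illustration of the Red Queen Effect: as AI capability theta rises,
# each firm's private cost saving grows while aggregate demand at the
# symmetric equilibrium shrinks. All functional forms are invented.

def private_saving(theta):
    """Per-task cost saving a firm captures, rising with capability theta."""
    return 0.5 * theta

def equilibrium_demand(theta):
    """Aggregate consumer demand at the symmetric equilibrium, where every
    firm automates a fraction theta of tasks and the displaced workers'
    spending disappears from the product market."""
    return 1.0 - 0.8 * theta

low, high = 0.2, 0.9
# Better AI strengthens the private incentive to automate...
assert private_saving(high) > private_saving(low)
# ...while shrinking the demand base that every firm's revenue depends on.
assert equilibrium_demand(high) < equilibrium_demand(low)
```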
Why the Comfort Policies Do Not Work
The paper tests five proposed remedies against the mechanism. Each one fails for a specific reason rooted in the structure of the problem.
Universal basic income raises the floor on living standards. It does not change the incentive to automate. A company that replaces workers still captures the cost saving; the UBI payment is made by the government, not recovered from the firm's revenue. The race continues.
Worker equity participation aligns incentives within a firm but not across firms. If your company gives workers a stake in profits, those workers may prefer automation that raises profitability. But they are not your competitors' workers, and the demand externality operates between firms. Fixing your own house does not fix the neighborhood's.
Capital income taxes operate on profit margins, not on the per-task automation decision where the externality lives. A robot tax that makes automation more expensive per task changes the equilibrium. A tax on the profits that automation generates does not. The paper distinguishes these carefully and finds capital taxes do not shift the automation rate.
Coasian bargaining — firms voluntarily agreeing to restrain automation — cannot work because automation is a dominant strategy. Any firm that agrees to hold back while its competitors automate wins the cost advantage. The agreement is not self-enforcing. There is no mechanism to make it stick.
Upskilling narrows the gap but cannot close it. The authors acknowledge that new task creation has historically offset automation-driven displacement, as Acemoglu and Restrepo have documented. But upskilling works on the supply side of labor. The demand externality is on the product-market side. They are different problems. Reskilling displaced workers does not regenerate the consumer demand that automated-away workers no longer have.
The One Thing That Works
The only instrument in the paper's framework that implements the cooperative optimum is a Pigouvian automation tax: a per-task charge set equal to the uninternalized demand loss created when a firm replaces a human worker. If a company that automates a task must pay a tax roughly equal to the revenue it will destroy for other firms by eliminating that worker's spending, the private incentive aligns with the social cost. The tax revenue can fund retraining programs that raise income replacement rates, which shrinks the externality over time, potentially making the tax self-limiting.
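The alignment logic can be sketched in a few lines. The per-task figures below are invented for illustration; the paper supplies the general mechanism, not these numbers:

```python
# Sketch of how a Pigouvian automation tax aligns private and social
# incentives. All quantities are invented for illustration.

def firm_automates(cost_saving, tax):
    """A firm automates a task when its private net gain is positive."""
    return cost_saving - tax > 0

def socially_efficient(cost_saving, demand_loss):
    """Automation is efficient only when the cost saving exceeds the
    purchasing power destroyed for other firms."""
    return cost_saving - demand_loss > 0

cost_saving = 30_000   # illustrative per-task annual saving
demand_loss = 45_000   # illustrative uninternalized demand externality

# Untaxed, the firm automates even though society loses on net:
assert firm_automates(cost_saving, tax=0)
assert not socially_efficient(cost_saving, demand_loss)

# With the tax set equal to the uninternalized demand loss, the private
# decision matches the socially efficient one:
tax = demand_loss
assert firm_automates(cost_saving, tax) == socially_efficient(cost_saving, demand_loss)
```

Setting the tax equal to the externality is the standard Pigouvian move: it makes the firm's balance sheet reflect the demand it destroys elsewhere.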
The political economy of a robot tax in the United States is not encouraging. The paper describes what the correct instrument would look like, not whether it is achievable.
The 80% Problem
The paper cites Eloundou et al. (2024) for the estimate that roughly 80% of US workers hold jobs with tasks susceptible to automation by large language models. This is a task-level exposure estimate, not a displacement prediction. The reason it bears on this argument is structural: if 80% of the workforce is exposed to AI automation, the competitive pressure to automate is not confined to one sector or one wave. It runs across the economy simultaneously, which is what makes the Red Queen Effect so destructive. Every firm in every sector is racing toward the same cliff at the same time, rather than taking turns as historical automation waves did.
The paper's model is not predicting that 80% of workers will be displaced next year. It is saying the mechanism is in place for that pressure to be systemic rather than episodic. Block is an example, not a census.
The paper also cites more than 100,000 tech workers laid off in 2025, with AI named as a primary driver in over half the cases, concentrated in customer support, operations, and middle management. Goldman Sachs has deployed Cognition's AI coding assistant Devin in configurations where one senior engineer does the work previously done by a team of five. These are data points consistent with the mechanism, not proof of it.
What the Model Gets Wrong
A theoretical model is only as good as its assumptions. The paper's framework assumes perfectly competitive markets, fully rational firms, and symmetric firms that all face the same automation options. Real markets have monopoly rents, bounded rationality, network effects, and firms at different stages of AI adoption. Whether the demand externality the paper describes actually dominates these other forces in real-world competitive settings is an empirical question the paper does not resolve.
The mechanism is novel and the logic is internally consistent. Whether it describes the actual trajectory of AI-driven labor displacement with sufficient precision to support policy conclusions is a separate question. The paper's contribution is to identify a structural channel that has been underappreciated in the public discourse: the product-market feedback from displacement, running through competitive markets, that makes over-automation individually rational and collectively self-destructive.
The Founder's Question
For a founder building a B2B product, the uncomfortable framing is this: if the model is right, the consumer economy that funds enterprise software revenue is being quietly optimized away by the same AI systems that are also reducing the headcount of the buyers. Every middle-management layer that AI displaces is a layer that spent money on tools, subscriptions, and services. Every customer-support team that gets automated is a team that was a customer. The growth assumptions built on an expanding consumer class and a growing professional workforce may be fragile in a direction that is not yet visible in quarterly results.
The paper does not say this. It is a theoretical economics paper, not a venture capital memo. But it is worth sitting with: the competitive math that is driving Block and Goldman Sachs and Salesforce to replace human workers is the same math that determines whether the people buying your product will still have jobs in five years.
The paper is on arXiv. The authors are Brett Hemenway Falk at the University of Pennsylvania and Gerry Tsoukalas at Boston University.