The counterintuitive finding: giving every user a better AI agent for outreach makes matching in two-sided marketplaces worse, not better, until you add a price tag. In a publicly auditable simulation, researchers at Strange Loop Canon showed that at full AI adoption, provider response rates collapse from 48 percent to 2 percent and net welfare drops 88 percent. The fix is not a better model. It is a 5-cent charge per outbound message.
The number is three months old. What is new is the mechanism, and the gap between what the code shows and what the platforms are doing.
The simulation models a home-services marketplace: 80 customers and 40 providers, each side running AI agents that rank and message the other. At zero AI adoption, each provider receives about two messages a day and responds to roughly half of them. At 100 percent adoption, each provider's inbox climbs past 10 messages per day, and the response rate falls to 2 percent. The agents have not gotten worse at matching. They have generated more contact attempts than the provider side can absorb.
What broke is coordination. Each AI agent, optimizing independently against the same pool of providers, overwhelms them. Providers adapt by raising their acceptance thresholds, effectively screening out everyone, and the market stops clearing. The researchers call it the tragedy of the agentic commons, a deliberate echo of the classic economics thought experiment describing what happens when everyone draws on a shared resource without coordination.
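The congestion dynamic can be sketched in a few lines of Python. This is an illustrative toy, not the authors' published code: the customer and provider counts come from the article, but the six-messages-per-AI-customer figure and the one-reply-per-day provider capacity are assumptions chosen to land near the article's inbox sizes.

```python
import random

# Toy model of the congestion dynamic. Counts (80 customers, 40
# providers) are from the article; MSGS_PER_AI_CUSTOMER and the
# one-reply-per-day provider capacity are illustrative assumptions.
N_CUSTOMERS, N_PROVIDERS = 80, 40
MSGS_PER_AI_CUSTOMER = 6   # assumed; pushes average inboxes past 10/day
MSGS_PER_HUMAN = 1
REPLY_CAPACITY = 1         # assumed replies a provider can manage per day

def simulate(ai_adoption):
    """Return (average inbox size, overall response rate)."""
    inbox = [0] * N_PROVIDERS
    for _ in range(N_CUSTOMERS):
        uses_ai = random.random() < ai_adoption
        n = MSGS_PER_AI_CUSTOMER if uses_ai else MSGS_PER_HUMAN
        # Agents act independently, so outreach volume scales with
        # adoption while provider attention stays fixed.
        for p in random.sample(range(N_PROVIDERS), n):
            inbox[p] += 1
    replies = sum(min(REPLY_CAPACITY, m) for m in inbox)
    return sum(inbox) / N_PROVIDERS, replies / sum(inbox)

random.seed(1)
print(simulate(0.0))  # small inboxes, high response rate
print(simulate(1.0))  # flooded inboxes, collapsed response rate
```

The point the toy makes is the article's: the agents never get worse at ranking, yet the response rate collapses, because total outreach grows with adoption while the provider side's capacity to answer does not.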
The fix, according to the code, is almost disappointingly simple: charge for attention. Each outbound message costs 5 cents, and each customer has a 50-cent outreach budget. Customers send a limited number of messages and stop. Provider inboxes settle back to about three messages per day, manageable and worth answering. Welfare recovers 77 percent of what was lost.
What the code audit reveals, and what the blog post does not emphasize, is that the pricing fix does not improve the matching algorithm. It prices away the coordination problem. The 5-cent toll is a coordination tax, not a better ranking. In the simulation, this works because the toll forces customers to value each outreach attempt. Without it, every agent optimizes independently and floods the other side of the market.
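How the attention price changes an individual agent's behavior can be sketched as follows, again as an illustration rather than the published mechanism: the 5-cent fee and 50-cent budget are the article's numbers, while the decaying marginal values assigned to ranked candidates are an assumption.

```python
# Sketch of budget-constrained outreach under a per-message fee.
# FEE_CENTS and BUDGET_CENTS are the article's numbers; the decaying
# marginal values below are an illustrative assumption.
FEE_CENTS, BUDGET_CENTS = 5, 50

def messages_sent(marginal_values_cents):
    """Contact ranked candidates in descending order of value; stop
    when the next contact is not worth the fee or the budget is spent."""
    spent = sent = 0
    for v in sorted(marginal_values_cents, reverse=True):
        if v < FEE_CENTS or spent + FEE_CENTS > BUDGET_CENTS:
            break
        spent += FEE_CENTS
        sent += 1
    return sent

# Assumed: the value of each extra contact falls off quickly.
values = [20 // (k + 1) for k in range(12)]  # 20, 10, 6, 5, 4, ... cents

print(messages_sent(values))     # fee cutoff: agent stops after a few messages
print(messages_sent([99] * 20))  # budget cap alone: at most 50 / 5 = 10 messages
```

Set `FEE_CENTS` to zero and neither stopping condition ever fires: the agent messages every candidate, which is exactly the flooding regime the unpriced simulation produces.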
The platforms have not implemented it. Upwork's AI writing tools for cover letters and job matching are free. Thumbtack's Pro AI product has no per-message charge. Bumble has floated AI matchmakers talking to each other on users' behalf but has not shipped the feature. None of these platforms appears to have added a coordination price to their AI outreach.
The economic incentive to add the toll is also weak. A per-message fee creates friction, and friction reduces the engagement metrics that justify product investment. The path of least resistance is to deploy AI agents and hope the market adapts. The Strange Loop Canon simulation predicts the market will not adapt without the price scaffolding.
One reason for skepticism about whether the collapse actually happens: Annie Liang, an economist at Northwestern University, published a theoretical paper in January 2025 showing that when personality is sufficiently high-dimensional, meeting two people in person beats searching over infinite AI representations. The noise in AI approximations, her model shows, compounds faster than the gains from scale. If her result generalizes, the coordination failure the simulation predicts may be secondary to a deeper approximation problem.
Whether the collapse the model predicts actually materializes in production is the open question. The platforms have not published provider response rate data before and after their AI rollouts. The congestion problem the code predicts has not been confirmed in real marketplaces.
What the code offers is a reproducible result, a public methodology, and a stress test for any platform considering handing AI agents the keys to customer outreach. The stress test says: add price scaffolding before you deploy, or watch your response rates collapse. The platforms are not running the test. They are running the production system instead.