When every customer on a two-sided marketplace gets an AI agent, the providers get buried. The fix is not a better model. It is a price.
A new experimental paper from Strangeloop Canon simulates such a marketplace and finds that at full agent adoption, provider inboxes receive 10.6 messages a day against a baseline of 2.1. Response rates collapse from 48 percent to 2 percent. Net welfare drops 88 percent. The market does not clear. Adding a modest per-message cost, charged to the customer side, cut inbox volume from 10.6 to 2.9 messages per day and recovered 77 percent of the lost welfare. A price signal, not a model upgrade. The full experimental setup and raw results are on GitHub.
The researchers ran five experimental conditions. AI preference elicitation, using a language model to parse free-text descriptions, outperforms structured questionnaires in categories with large option sets, consistent with prior work by Manning, Rusak, and Horton showing that LLM-parsed natural language beats surveys when choices are abundant. The value is in the parsing, not the conversation: a language model extracting signal from the same free-text input delivered an 11.4 percent welfare uplift, while adding conversational follow-up questions contributed only 4.3 percent more.
The collapse arrives with saturation. When every customer runs an AI agent, each one maximizing its user's match probability at zero marginal cost, the agents contact every relevant provider simultaneously. The coordination problem scales as the product of the two sides: N customers each evaluating M providers produces N×M contacts, which grows quadratically as both sides grow together. Without a price signal, nothing stops the inbox flood. Providers who cannot filter signal from noise do the only rational thing: they stop responding. The network effect that makes a two-sided marketplace valuable becomes the mechanism of its collapse.
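The scaling logic fits in a few lines. This is a back-of-envelope sketch with illustrative parameters, not the paper's calibrated simulation; the population sizes and the number of relevant providers per customer are assumptions.

```python
def avg_inbox(n_customers: int, n_providers: int,
              relevant_per_customer: int, agent_share: float) -> float:
    """Average daily messages landing in each provider's inbox.

    Customers with agents contact every relevant provider (zero marginal
    cost); customers without agents contact just one. All parameters are
    illustrative, not the paper's calibration.
    """
    agent_msgs = agent_share * n_customers * relevant_per_customer
    human_msgs = (1.0 - agent_share) * n_customers * 1
    return (agent_msgs + human_msgs) / n_providers

# 500 customers, 100 providers, 20 relevant providers per customer (assumed)
print(avg_inbox(500, 100, 20, 0.0))  # no agents: 5.0 messages/day
print(avg_inbox(500, 100, 20, 1.0))  # full adoption: 100.0 messages/day
```

Per-provider load grows linearly in agent adoption and in how many providers each agent contacts; when both sides of the market grow together, total contacts grow quadratically.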
Prices do not just signal scarcity. They discipline attention: an agent that must pay to message a provider will only message when the expected value exceeds the cost. This mirrors how monetary exchange outperforms barter by collapsing the same quadratic coordination problem. Without a price signal, the individually rational move produces a collectively irrational one.
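The attention-discipline rule is just a threshold on expected value. A minimal sketch, with assumed match probabilities and an assumed match value (the paper does not publish these parameters):

```python
def should_message(match_prob: float, match_value: float, fee: float) -> bool:
    """Message a provider only when expected match value exceeds the fee."""
    return match_prob * match_value > fee

# Candidate providers ranked by assumed match probability (illustrative).
candidates = [0.30, 0.12, 0.05, 0.02, 0.01]
MATCH_VALUE = 100.0  # assumed value of a successful match

free = [p for p in candidates if should_message(p, MATCH_VALUE, fee=0.0)]
priced = [p for p in candidates if should_message(p, MATCH_VALUE, fee=5.0)]

print(len(free))    # 5: at zero cost, the agent contacts everyone
print(len(priced))  # 2: only contacts whose expected value clears the fee
```

At a zero fee, every positive-probability contact clears the bar, which is exactly the flood; a small fee prunes the long tail of low-probability messages while leaving the high-value ones untouched.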
A theoretical paper from Annie Liang at the University of Pennsylvania offers a different complication. Her work shows that when personality is sufficiently high-dimensional, meeting two people in person beats AI search over infinitely many AI representations — because noise in AI approximations compounds faster than the benefits of scale. The Strangeloop result is the experimental complement: even where AI elicitation works at moderate adoption, the coordination failure at scale is severe enough to overwhelm the gains. Together, the papers suggest the efficiency question for AI matching is not whether the technology improves individual matches but whether the aggregate coordination problem can be managed at the population level.
For platforms, the implication is a reason to watch closely. Upwork announced its own AI agent for the platform in April; Fiverr has launched AI tools for freelancers. If these platforms' own agents scale to all users — or if third-party agents begin operating on their marketplaces — the simulation predicts exactly the coordination collapse described: inbox flooding, response rate collapse, welfare loss. The mitigation is not a better matching algorithm. It is a pricing mechanism. Platforms that understand the dynamics can price their way out before the inbox flood becomes a provider exodus. Platforms that treat this as a product problem rather than a market design problem will learn the lesson the hard way.
Whether any live platform has hit this wall is an open question. No marketplace has published inbox or welfare data confirming the simulation. The welfare numbers are reproducible and the code is on GitHub. For the people building these systems, the fix is available now.