Three unrelated Anthropic customers hit the same enforcement dead end in a nine-day window, surfacing a structural gap in how the company handles account disputes at scale. The cases span a Latin American fintech processing international transfers, the creator of a developer tool built on Claude, and users whose accounts were incorrectly flagged as belonging to minors. Each was caught by automated systems, and each was pointed to the same Google Form with no timeline, no case number, and no person to call. Anthropic confirmed at least two of the three as false positives.
The fintech case shows what that looks like in practice. Belo, an Argentina-based company that processes international transfers and prepaid cards, had more than 60 Claude accounts suspended for over 15 hours. Its chief technology officer, Pato Molina, posted publicly on X on April 18; access was restored after the public outcry, and Anthropic confirmed the suspension was a mistake. The resolution required going public: there was no internal path to escalate faster.
The same enforcement process appears in the other incidents. OpenClaw's creator had his account suspended on April 10 and reinstated after public pressure, as TechCrunch documented. Multiple users were incorrectly flagged as under 18 and had their accounts suspended, according to reporting by MediaNama. In each case, Anthropic cited a high volume of usage policy violations without naming the specific rule violated. Anthropic's own help site confirms the Google Form is the official appeals channel and notes that response times are longer than usual due to a recent launch, with no commitment to a specific window.
The clustering in a short window is what makes the pattern worth covering now. When a company's Claude accounts are suspended, its work stops. The official appeal process offers no service-level agreement, no escalation path, and no named contact. Nearly half of enterprises already say vendor lock-in has hindered their ability to adopt better AI tools. A false positive that takes 15 hours to resolve attaches a concrete cost to that dependency. Anthropic's Managed Agents product explicitly stores session data in Anthropic-managed infrastructure, a design choice that deepens lock-in for customers who build workflows on top of it.
What happens next is a business question more than a technical one. Companies caught in enforcement sweeps can push for contractual protections: service-level commitments, named contacts for appeals, and timelines for resolution. Or they can accept the current reality and spread their AI vendor exposure across multiple providers, hoping the next flag lands somewhere else.
Anthropic did not respond to a request for comment.