Three companies that spend most of their time competing are now sharing threat intelligence.
OpenAI, Anthropic, and Alphabet's Google have begun working together through the Frontier Model Forum, an industry nonprofit the three companies co-founded with Microsoft in 2023, to detect adversarial distillation attempts by China-based users, Bloomberg reported on April 6. The Forum was conceived when the three labs were still largely building separate products for separate markets.
The shift has been building for months. In February, OpenAI sent a memo to the U.S. Congress accusing DeepSeek of using distillation and obfuscated routers to scrape its models and gain an unfair competitive advantage. U.S. officials have estimated that unauthorized distillation costs Silicon Valley labs billions of dollars in lost profit annually, Times Now reported.
The scale of the problem is now documented in detail. Anthropic published data showing that Chinese labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts. MiniMax accounted for roughly 13 million of those exchanges; Moonshot, the startup behind the Kimi chatbot, for about 3.4 million; and DeepSeek, which has drawn the most public attention, for approximately 150,000. Google separately detected over 100,000 prompts aimed at extracting Gemini's capabilities through adversarial distillation between 2025 and 2026.
The threat, according to Anthropic, extends beyond any single company or region. The company said in its blog post that distilled models lack the safety guardrails that block assistance with bioweapons development, a national security concern that crossed the line from intellectual property dispute into something harder to characterize as routine corporate competition.
Google and Anthropic declined to comment. OpenAI confirmed its participation in the Forum's information-sharing effort, but provided no further detail. The asymmetry in who would talk is itself a data point: the companies most exposed in the data were also the ones least willing to explain their response on the record.
The Forum is now being asked to do something different from what it was designed for. Whether it is becoming an enforcement mechanism rather than simply a clearinghouse for threat intelligence is a harder question than the companies are willing to answer on the record. The accused labs, meanwhile, are not marginal players. MiniMax reported 27.6 million monthly active users in its own September 2025 financial results, though Anthropic's blog post cited a higher figure of more than 100 million. Moonshot is a serious research organization. DeepSeek built its reputation on efficiency claims that its distillation accusers dispute. These companies are the competitive frontier.
What the Forum is becoming is an open question. An organization that shares threat intelligence about model extraction is a different institution from one that sets voluntary safety benchmarks or coordinates research. The first function is defensive and transactional. The second would require agreeing on what constitutes safe development, which has historically been harder to coordinate than detecting whether a model is being scraped.
That coordination gap is where the real story sits. The 16 million exchanges are a number. The question is what happens when the three companies that now have to agree on what to do about it are also the three companies competing hardest for the same customers, the same researchers, and the same compute. Sharing threat intelligence is easy. Sharing a definition of acceptable development is not.
Notebook: The national security framing around bioweapons guardrails is a notable escalation in how frontier labs are presenting the distillation problem. This language did not appear in earlier anti-distillation work. Whether that reflects a genuine threat evolution or a rhetorical choice designed to move the issue out of the IP column and into something that commands more government attention is worth watching.