Eight AI safety groups told Brussels something this week that sounds like a pharmaceutical warning: your regulator doesn't have the staff to do the job the law assigned it.
The groups wrote to the EU AI Office demanding it more than quadruple its frontier model safety unit, from 36 people today to 160 by 2030. Their argument mirrors the one that reshaped drug regulation after the 1990s: you cannot rely on developers to evaluate their own products. You need regulators with the technical capacity to say no. The AI Office, with roughly 140 staff across all functions, does not currently have that capacity for Anthropic's most powerful model, according to ResultSense.
Anthropic chose its own auditors. The UK's AI Security Institute, a government body with a role comparable to the AI Office's, got early access to Mythos Preview, a cybersecurity model Anthropic has not released publicly, and published a widely praised technical analysis within a week. The European Commission was not among the 40 organizations Anthropic selected for early access, POLITICO reported. Within the EU, only Germany has opened talks with Anthropic about Mythos, officials from eight national European cyber agencies told POLITICO.
The EU AI Office, which has legal authority over frontier AI systems under the AI Act and can levy fines of up to €35 million for violations, has no access at all.
Claudia Plattner, Germany's chief cybersecurity official, said in January that authorities were still assessing whether a tool like Mythos would reach the open market. "That question, in turn, has profound implications for national and European security and sovereignty," Plattner told CSO Online.
The groups' demand to more than quadruple the safety unit by 2030 targets the gap between what the AI Act authorizes and what the AI Office can actually do. The law gives Brussels the power to regulate frontier AI. Anthropic's access decisions show that the power cannot currently be exercised.
Anthropic has not committed to withholding Mythos from public release permanently. Logan Graham, head of Anthropic's frontier red team, said at a WIRED event that the company needs to prepare for a world where these capabilities are broadly available in six, 12, or 24 months. If a competitor releases an equivalent capability publicly before Brussels has the technical capacity to evaluate it, the enforcement gap the AI Act was designed to close may prove permanent.