The Four-Day AI Director: How the White House Burned a Bridge to Anthropic and Left a Government Vacancy
The Center for AI Standards and Innovation is supposed to be America's hub for evaluating whether AI systems are safe. Its inaugural director lasted four days.

Collin Burns gave up his Anthropic equity, packed up, and moved across the country for a government job he thought would let him shape how the United States evaluates frontier AI systems. He lasted four days.
The White House pushed Burns out of the Center for AI Standards and Innovation on Thursday, just four days after he started as the Commerce Department agency's new director. According to four people familiar with the situation, the White House had not been briefed on Burns's appointment before it was made, meaning the Commerce Department selected a former Anthropic researcher to lead the government's primary AI evaluation body without alerting the President's own staff. When the appointment became known, the reaction was swift and negative. Burns was asked to resign that afternoon and was replaced by Chris Fall, a career government scientist who ran the Department of Energy's Office of Science during the first Trump administration.
The episode is the personnel consequence of a running feud between the Trump administration and Anthropic that has been building for months. Secretary of War Pete Hegseth declared Anthropic a supply chain risk to national security after the company declined the Pentagon's request for unrestricted access to its most powerful model, Mythos. President Trump called the company "RADICAL LEFT" and "WOKE" on social media. Yet the National Security Agency is currently running Anthropic's Mythos Preview model in production, according to people familiar with the arrangement — a quiet exception to the administration's public posture that nobody in the White House appears willing to explain.
Anthropic has restricted Mythos to roughly 40 organizations, citing the model's potentially dangerous computer hacking capabilities. Federal researchers at CAISI have been conducting red-teaming evaluations of the model to assess national security risks. The testing was underway before Burns arrived, and it is unclear whether his departure affects those evaluations.
Burns's background made him unusual for a government technology role: he spent 2023 at OpenAI before joining Anthropic in late 2024, according to his personal website, making him a rare bridge between the two most prominent AI safety-focused labs and a federal body designed to evaluate their work. He co-authored the Weak-to-Strong Generalization paper, one of the most cited AI safety research publications of 2023. Miles Brundage, a former OpenAI policy researcher who now runs the nonprofit Gladstone AI, noted that Burns's appointment was "the key thing" — CAISI finally getting a leader with technical expertise — and described it as one additional signal of a thaw in the administration's relationship with frontier AI labs.
That signal turned out to be noise. The White House was not briefed on the appointment, which sources described as an omission significant enough to prompt direct intervention. Dean Ball, a former Trump administration AI adviser, wrote that Burns had been "rewarded by his country with a punch in the face" after giving up equity and relocating.
The episode illuminates a structural tension the government has not resolved: the people most qualified to evaluate frontier AI systems are almost all employed by the companies whose systems the government needs to evaluate. Putting Burns in charge of the government's primary AI evaluation body made logical sense. It also made him politically untenable the moment the White House found out.
CAISI was created in November 2023 under Biden as the AI Safety Institute, renamed and repositioned under Trump in June 2025 with a mandate that emphasized innovation alongside security. Secretary of Commerce Howard Lutnick said at the time that "censorship and regulations" had been "used under the guise of national security" to limit innovators. The rebranding shifted the organization's emphasis from safety evaluation toward commercial AI standards.
Fall, the replacement, brings deep government science credentials but no documented background in AI safety research. CAISI's mission — evaluating frontier models for national security risks, coordinating with the NSA and Department of Energy on AI capability assessments — now rests with someone whose expertise is managing large federal research organizations, not analyzing the properties of large language models.
The practical consequence may be a continuing gap between the government's public posture toward Anthropic and its operational dependence on the company's technology. The NSA is running Mythos in production. CAISI's red-teaming evaluations of the same model appear to be ongoing. The agency that needs technical expertise to do this work is now led by a career bureaucrat rather than someone who spent the last two years inside frontier labs. The White House's briefing failure ensured that gap will persist.
Burns declined to comment. The Commerce Department said Fall "brings the scientific leadership needed to ensure America leads the world in evaluating frontier AI models." The department did not answer questions about why Burns was pushed out.
