Collin Burns spent four days in Washington.
The former Anthropic researcher, who had left a lucrative lab job and relocated across the country to serve in government, was pushed out of his role as director of the Center for AI Standards and Innovation last Thursday, according to four people with knowledge of the situation. He had started the previous Monday. Four days from offer to exit.
Burns was not given a reason for his ouster, according to one of the people, who spoke on the condition of anonymity to describe private conversations. The White House was not briefed on his selection before Commerce made the offer, a breach of standard process that contributed to his rapid removal, two of the people said. The Washington Post first reported on Thursday afternoon that Burns had been picked for the role. By evening, Commerce announced a different hire.
Chris Fall, a scientist who served as director of the Office of Science at the Department of Energy during the first Trump administration, replaced Burns. Commerce Secretary Howard Lutnick said in a statement that Fall brings the scientific leadership needed to ensure America leads the world in evaluating frontier AI models.
The episode exposes a contradiction at the heart of the administration's AI governance strategy: the government is litigating to maintain Anthropic's designation as a national security risk even as the agency tasked with evaluating frontier AI models has been granted access to the company's most powerful system.
Anthropic's Mythos model has capabilities the company has described as potentially dangerous in the wrong hands. Anthropic has limited access to the model to roughly 40 organizations, including the Commerce Department's CAISI and the National Security Agency. The Cybersecurity and Infrastructure Security Agency, the government's primary digital defense arm, does not yet have access, according to Computerworld, which cited Axios reporting.
Dean Ball, a former AI adviser in the Trump administration, wrote on social media that Burns had given up valuable Anthropic stock and moved across the country to take the government position. He called the outcome a punch in the face.
The administration designated Anthropic a supply chain risk in early March, blocking the company from Pentagon contracts. Anthropic has filed two lawsuits challenging the designation, arguing it amounts to unlawful retaliation for the company’s stance on AI safety guardrails. A federal appeals court declined to pause the designation last week, though a California judge had previously blocked one of the Pentagon’s orders, saying officials appeared to have retaliated against Anthropic for its views on AI safety.
Burns joined OpenAI in 2023 after leaving a PhD program in computer science at the University of California, Berkeley. He moved to Anthropic in late 2024. His personal website lists several publications, including a paper on weak-to-strong generalization that was accepted as an oral presentation at ICML 2024.
CAISI was established under President Joe Biden as the U.S. AI Safety Institute before the Trump administration renamed and restructured it. Lutnick said at the time that the new body would focus on demonstrable risks including cybersecurity and biosecurity, and represent U.S. interests in international AI standards bodies.
What happens to the revolving door now is an open question. The government needs people who understand frontier AI to govern it. The people who understand frontier AI tend to work at the labs Washington is simultaneously declaring too dangerous to contract with.