Anthropic, OpenAI Talk Safety. Their Headcounts Don’t

Anthropic and OpenAI have spent the past several months in a very public argument about who takes AI safety more seriously. The rhetoric is loud. What each company has actually done tells a more complicated story, one in which the safety brand and the structural commitment are not the same thing.
In late February 2026, Anthropic dropped its signature safety pledge: the promise to pause model training if capabilities outpaced safeguards. Under the revised policy, the company said it would keep training unless it held a “significant lead” over competitors, a standard its leadership quietly admitted almost no AI lab meets. The timing was not incidental. Defense Secretary Pete Hegseth had reportedly threatened to pull a $200 million Pentagon contract and blacklist Anthropic from government work unless it relaxed its restrictions on military use. A top safety leader resigned from Anthropic the week before the policy change, sources told the Wall Street Journal.
The week after Anthropic announced it was dropping its most concrete safety commitment, the company unveiled the Anthropic Institute, a new think tank that brings three existing teams under co-founder Jack Clark, now also Anthropic’s Head of Public Benefit. The Institute consolidates the Frontier Red Team, which stress-tests Claude against vulnerabilities in real codebases like Firefox; the nine-person Societal Impacts team, which has produced some of the most cited research on which jobs AI is actually displacing; and the Economic Research team, which publishes the company’s Economic Index. New hires include Matt Botvinick from Google DeepMind, former OpenAI researcher Zoë Hitzig, and economist Anton Korinek. Clark has said he expects the Institute’s staff to double every year.
Anthropic employs 4,585 people as of late February 2026, according to Tracxn data. OpenAI’s headcount is harder to pin down: employee databases including TrueUp and Unify have estimated the company at roughly 2,500 to 3,000 people, while other estimates have run higher. What’s clear is that both companies are still small relative to the tasks they claim to be managing, and that both have, in the same six-week window, moved in the same direction on safety commitments: away from them.
OpenAI’s move was quieter. When the company filed its latest IRS Form 990 in November 2025 (covering tax year 2024), it removed the word “safely” from its mission statement. Every prior filing had included it. The new mission: “to ensure that artificial general intelligence benefits all of humanity.” No “safely.” The change came as OpenAI restructured from a nonprofit-controlled entity into a conventional for-profit company, ceding 74% of board control to investors including Microsoft, which now owns approximately 27% of the company. Scholars of nonprofit accountability flagged the change as a structural signal, not just a rhetorical one. Alnoor Ebrahim, a professor at Tufts’ Fletcher School who first noted the shift, wrote that OpenAI’s overhaul was “a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.”
OpenAI has also made targeted safety hires. In February, it brought on Dylan Scandinaro from Anthropic to head preparedness, advertising the role at up to $555,000 in base salary. Scandinaro posted publicly that AI is advancing rapidly and that the risks of extreme harm are real. The hire was a deliberate signal: when OpenAI went looking for a senior safety executive, the company whose entire brand is built on safety was the right place to look.
The pattern in both cases is the same. Both companies are scaling fast, both have stepped back from formal safety commitments, and both have simultaneously made targeted hires that project seriousness about risk. The Anthropic Institute’s Societal Impacts team, nine people, produced the most detailed study to date on AI-driven job displacement. Anthropic’s total workforce is approaching 5,000. The company that built its identity on structural safety commitments has a dedicated safety research operation that fits inside a mid-sized startup team. And it just dropped the one commitment that was hardest to dismiss as theater.
What the headcount data shows is not that either company is lying about caring about safety. It shows that caring — the brand, the rhetoric, the institutional structure — is something both companies are finding harder to operationalize as they scale toward AGI. The Anthropic Institute may double its staff every year. It started with teams of dozens.
Sources: CNN Business (https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-change), Fortune (https://fortune.com/2026/02/23/openai-mission-statement-changed-restructuring-forprofit-business/), Anthropic Institute announcement (https://www.anthropic.com/news/the-anthropic-institute), The Verge (https://www.theverge.com/ai-artificial-intelligence/892478/anthropic-institute-think-tank-claude-pentagon-jack-clark), Morning Brew (https://www.morningbrew.com/stories/2026/02/26/anthropic-drops-core-safety-pledge), Tracxn (https://tracxn.com/d/companies/anthropic)

