When Anthropic announced Tuesday that Novartis CEO Vas Narasimhan had joined its board, the company's blog post buried the lede in the fourth paragraph: Trust-appointed directors now make up a majority of the board. Four of seven. (Anthropic)
That matters more than the Narasimhan name-drop. For the first time, the Anthropic Long-Term Benefit Trust — a five-person body with no financial stake in the company — has actual board control at an AI lab that says it prioritizes safety over profit. The trustees' names became public in 2023. Their decision-making process is not.
Anthropic's governance structure is unlike that of other tech companies. The LTBT holds Class T stock that gives it the authority to appoint and remove directors. The Trust's five members — with backgrounds in AI safety, national security, public policy, and social enterprise — cannot be bought with equity because they own none. They answer to no shareholders. Anthropic disclosed the founding trustees in September 2023. (Anthropic governance document)
That structure was designed precisely for the moment the company is approaching: a potential IPO. Anthropic is weighing going public as early as this year, Reuters reported, citing people familiar with the matter. If it does, the Trust becomes the formal check on investor pressure — the mechanism that keeps the company from optimizing purely for shareholder return.
The board Narasimhan is joining includes other notable names: former Secretary of State Hillary Clinton, who joined in January, and AI safety researcher Dan Hendrycks. But Hendrycks, Clinton, and now Narasimhan serve at the pleasure of the Trust. The Trust can remove them. (Anthropic)
What the Trust still cannot do is explain itself. No public record shows how the trustees have used their appointment and removal authority — whether they have ever overridden a board decision, whether they have ever been consulted on a safety question, whether they have ever disagreed with management. The structure is described in public documents. The decisions are not.
This is not a theoretical concern. As AI labs accumulate power over increasingly consequential systems, the question of who governs them moves from corporate law into public safety. Anthropic has said its mission is to build reliable, interpretable, steerable AI. Whether that mission survives contact with public markets depends substantially on whether those five named people are actually exercising oversight — and whether anyone would know if they weren't.
The Narasimhan appointment is real. The governance shift it triggered is real. The trustees' names are public. Whether they are using their power is not.
That's the story.