When Vas Narasimhan joined Anthropic's board last week, the news landed as a familiar kind of story: another pharma executive betting on AI. The press release called it a milestone. The headlines called it a win for Anthropic's healthcare ambitions.
What the announcement actually said was stranger. Buried in its third paragraph was the detail that Narasimhan had not been appointed by Anthropic's shareholders, recruited by its executives, or selected by its investors. He was appointed by the company's Long-Term Benefit Trust, an independent body whose members hold no financial stake in Anthropic whatsoever. They benefit only if the AI lab succeeds, on terms their own legal documents do not define.
With this single appointment, that trust now controls a majority of Anthropic's board seats for the first time in the company's history.
Anthropic, the AI safety company behind Claude, is incorporated in Delaware as a Public Benefit Corporation, a legal structure that allows boards to weigh public interest alongside stockholder returns. The Long-Term Benefit Trust is Anthropic's mechanism for keeping that balance genuine rather than cosmetic: five trustees, selected for expertise in AI safety and public policy, with no economic interest in the company's outcome. They appoint directors who answer to the trust's mission, not to shareholders.
That governance model is now heading toward a public offering. Anthropic is targeting October 2026 for an IPO that could raise more than $60 billion, Bloomberg has reported. The company hired the law firm Wilson Sonsini Goodrich & Rosati in December 2025 to begin preparations, according to the Financial Times. If the offering proceeds, it will be the first time a Public Benefit Corporation with this kind of trust-based board structure goes to public markets at this scale.
The timeline matters. As recently as late 2025, the Long-Term Benefit Trust had filled only one of its three available board seats, according to the Longterm Wiki, a research site tracking AI governance structures. Narasimhan's appointment, together with that of Chris Liddell, a former Microsoft executive who joined the board in February, filled the remaining two seats within months, crossing the majority threshold roughly a year ahead of what the original governance design anticipated.
Why the acceleration? Anthropic did not respond to questions about the timing. The trust's chair, Neil "Buddy" Shah, said only that Narasimhan "has spent his career stewarding breakthrough science responsibly, exactly the perspective we are excited to have on the board as we develop consequential technology." Narasimhan himself said he joined because "speed alone isn't the goal" in healthcare AI.
The question of what the trust actually controls is where the story gets harder to tell cleanly. Anthropic has never published the full Trust Agreement governing the LTBT's powers. What is public, based on Anthropic's own 2023 disclosure and an analysis on the Harvard Law School Forum on Corporate Governance, is that the trust holds a special class of stock carrying the right to appoint and remove directors. What is not public: the specific thresholds at which stockholders can override or amend the trust's powers. The forum's 2023 analysis noted that the trust's arrangements "can be changed without the consent of the Trustees if sufficiently large supermajorities of the stockholders agree," without specifying what "sufficiently large" means.
A LessWrong analysis published in 2024 put the concern more directly: "Anthropic has not publicly demonstrated that the Trust would be able to actually do anything that stockholders do not like." Anthropic declined to comment on that analysis.
The practical implication is not abstract. If a future Anthropic board, appointed by a trust with no equity exposure, made a decision that major stockholders believed threatened their investment, those stockholders could theoretically move to override the trust's authority. Whether they could succeed depends on a document nobody outside Anthropic has read.
This is the structure that will govern one of the world's most powerful AI labs if the IPO proceeds. Institutional investors who buy shares will own equity in a company where the people controlling the board have no personal financial upside in its success. The people providing the capital will have no formal role in selecting those controllers. And the legal document that defines the boundary between the two groups is not public.
Anthropic's framing is that this structure is a feature, not a bug: a way to insulate consequential AI decisions from short-term market pressure. Its critics' framing is that it may be governance theater: a trust with theoretical power that dissolves the moment it conflicts with what investors actually want.
What happens in the months before a potential filing will be the real signal: whether the trust adds board members who reflect its mission, or whether the board's composition shifts toward candidates acceptable to whatever investors end up underwriting the $60 billion target.