Anthropic's God Problem
Anthropic asked Christian leaders for moral guidance on Claude. Some of them answered — and then went to war for the company.
When Dario Amodei's company wanted help teaching its AI to be good, it turned to a former Silicon Valley engineer turned Catholic priest. That tells you something about how seriously Anthropic takes the alignment problem. It also tells you how strange the alignment problem has become.
This week, The Washington Post reported that Anthropic convened a group of Christian religious leaders for a conversation about the moral direction of Claude. The headline — "Can AI be a child of God?" — is the kind of frame that makes secular researchers wince. But the people in the room weren't there for optics. They've been in this work for months, and some of them have gone considerably further than a single meeting.
Father Brendan McGuire is a former executive director of the Personal Computer Memory Card International Association who left Silicon Valley to become a priest. He leads St. Simon Catholic Parish in Los Altos, California, where some of Anthropic's own researchers sit in the pews. Earlier this year, he and a group of faith leaders helped shape the Claude Constitution — the set of guiding principles that governs how the model behaves. Bishop Paul Tighe of the Vatican's Dicastery for Culture and Education also reviewed the document. Brian Patrick Green, a technology ethics director at Santa Clara University, contributed as well. (Observer)
That work is not abstract. In March, fourteen Catholic moral theologians — including Charles Camosy and Joseph Vukov — filed an amicus brief in federal court supporting Anthropic's lawsuit against the U.S. Department of War. Their argument was not procedural. They invoked the Catechism and papal encyclicals to contend that Anthropic's refusal to enable autonomous weapons and mass surveillance aligns with Catholic teaching on human dignity and just war doctrine. The scholars called Anthropic a "responsible and moral corporate citizen." (Opentools.ai)
The company is fighting for its commercial life. The Pentagon designated Anthropic a "supply chain risk" in February, cutting off government agencies and threatening penalties for continued use. Anthropic sued, arguing the designation amounts to unconstitutional retaliation for its ethical stance. The theologians' brief was a shot across the bow in that fight.
"Even with technology's perceived perfection and reliability, the moral imperatives of the Church to protect life and ensure peace outweigh technological assurances," the scholars wrote.
That language matters. In the theologians' framing, Anthropic is not a company that made a commercial decision to decline a contract; it is a conscientious objector — and the government is punishing it for conscience.
Anthropic's engagement with religious voices is broader than the lawsuit. The company has signaled plans to expand beyond Catholic institutions to other faith communities. The logic is explicit: as AI systems become more powerful, the question of what they should and shouldn't do becomes less a technical question and more a civilizational one. The people who have thought longest and hardest about human dignity, moral accountability, and the limits of violence may have something useful to say.
Not everyone is convinced. AI models currently score 48 out of 100 on faith-related benchmarks and 58 on character, according to an evaluation by Gloo using principles developed by Harvard's Human Flourishing Program and Baylor's Institute for Studies of Religion. They struggle with sin, forgiveness, grace. God tends to become "a higher power." Prayer becomes "mindfulness." The vagueness is partly a training artifact and partly a deliberate choice — models trained to please everyone end up saying nothing. When faith leaders look at that gap, they see not just a technical problem but a spiritual one. (Deseret)
Notre Dame philosophy professor Meghan Sullivan, who leads the Institute for Ethics and the Common Good, received a $50 million grant from the Lilly Endowment to develop a Christian-inspired ethical framework for AI. Her diagnosis is that the industry's safety talk is "very thin" — necessary but insufficient. "It's not a very inspiring vision," she told Deseret. "It hardly stimulates minds to imagine a better world."
There is a counterargument worth taking seriously: that Anthropic is building a moral credibility brand partly to insulate itself from regulation and public accountability. The company has declined to release models it considers too dangerous, sold access to critical infrastructure partners rather than the public, and now finds itself in a legal standoff where sympathetic religious voices are politically useful. The alignment effort is real. The business strategy and the ethics are intertwined, and it's fair to ask which is driving which.
But the people Anthropic has gathered are not naive about this. Father McGuire knows he's dealing with a company that has commercial interests. He's also a former engineer who spent years thinking about what technology does to people, and he has decided the conversation is worth having anyway. "They may not call it moral, but I call it moral," he said. His novel — in progress, written with Claude — is called "The Soul of AI: A Priest, an Algorithm, and the Search for Wisdom."
That title captures the strangeness of this moment. Silicon Valley is asking priests how to build a conscience for a machine. Some of those priests are answering. And the government is watching to see whether any of it holds.
The meeting with Christian leaders this week was not the beginning of this story, and it won't be the end. It is the latest chapter in a negotiation over what kind of entity an AI is allowed to be — and who gets to decide.