Father Brendan McGuire left Silicon Valley for the priesthood. He never left Silicon Valley behind.
McGuire, 60, leads St. Simon Catholic Parish in Los Altos, California — a congregation that counts some of the Valley's AI researchers among its members. Before he wore the collar, he studied cryptosystems at Trinity College Dublin in the 1980s and went on to run the Personal Computer Memory Card International Association. He had the kind of résumé that leads to comfortable exits. He chose ordination instead. And then, earlier this year, Chris Olah, one of Anthropic's co-founders, called.
"He basically was asking for direct help from the Vatican to convene and help the industry, because the industry was going so fast down this road," McGuire recalled in an interview with Observer. What followed was, by his own description, mind-blowing: McGuire helped Anthropic shape the Claude Constitution — the set of principles governing how the Claude chatbot behaves — through a partnership with the Vatican Dicastery for Culture and Education and Santa Clara University.
Now that partnership has moved from ethics consultations to federal court. Fourteen Catholic moral theologians and ethicists filed an amicus brief in Anthropic's lawsuit against the Department of Defense, backing the AI company's refusal to allow its systems to be used for mass domestic surveillance or fully autonomous weapons. In a March 26 ruling, Judge Rita Lin of the U.S. District Court for the Northern District of California temporarily blocked the Pentagon's supply chain risk designation against Anthropic — the first time the label has been applied to an American company — finding that Anthropic is likely to prevail on the merits of its case.
Anthropic chief executive Dario Amodei laid out the company's position in a public statement: the company supports AI for lawful foreign intelligence and counterintelligence missions but draws the line at mass domestic surveillance and fully autonomous weapons. "Using these systems for mass domestic surveillance is incompatible with democratic values," Amodei wrote, noting that powerful AI makes it possible to assemble individually innocuous data into a comprehensive picture of any person's life — automatically and at massive scale. On autonomous weapons, Amodei was blunt: "frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk."
The Pentagon's response was to give the $200 million contract to OpenAI instead, and to designate Anthropic a supply chain risk — a label previously reserved for adversaries. Judge Lin wrote that the government's actions "appear designed to punish" and "cripple" Anthropic, and that punishing the company for going public with the government's demands is "classic illegal First Amendment retaliation."
The Catholic scholars who filed the brief argued that Anthropic's red lines — refusing mass surveillance and autonomous weapons — reflect the same principles the Catholic Church uses to evaluate technology: human dignity, the common good, and the requirement that weapons respect the human judgment required by just war theory. "Anthropic, in the red lines it has drawn for the use of its products on domestic mass surveillance and autonomous weapons systems, sought to uphold minimal standards of ethical conduct for technical progress," the brief reads. "In doing so, Anthropic was acting as a responsible and moral corporate citizen, not as a threat to the safety of the American supply chain."
McGuire considered filing his own declaration. "They're having a moral conversation," he said. "They may not call it moral, but I call it moral." He sees direct parallels between how humans develop conscience — through iteration, correction, and exposure to the full spectrum of human behavior — and how AI systems develop judgment. "That's the genuine formation of a conscience," he said. "I think we have to help these machines be tilted towards good, otherwise they're just going to reflect back the good and evil of the world — that's a horrifying thing, right?"
He is writing a novel about it. The working title: "The Soul of A.I.: A Priest, an Algorithm, and the Search for Wisdom." He's writing it with Claude.
Anthropic says its engagement with religious voices is a beginning, not an end, and that it plans to expand outreach beyond Catholic institutions. McGuire, for his part, remains focused on the people building the technology. "I'm still trying to help them grapple with the human element," he said, "and keep that at the forefront of their mind."
The Pentagon has seven days to appeal Judge Lin's ruling. What happens next may determine whether the Vatican and Silicon Valley's joint experiment in AI ethics survives contact with the procurement process.