In the heart of Silicon Valley, a 60-year-old Catholic priest is doing what many thought impossible: teaching a machine how to have a conscience. As Anthropic—the $18 billion AI powerhouse—battles for the title of “the safe AI company,” it has turned to an unlikely source of authority: the Vatican.
The story involves Father Brendan McGuire, a tech-executive-turned-priest, and Father Philip Larrey, a leading AI ethicist at Boston College. Together, they are answering the most provocative question of the digital age: Can an algorithm reflect the divine?
The Priest Behind the Constitution
Anthropic’s Claude AI operates on a “Constitutional AI” model—a set of guiding principles that the machine uses to self-correct. While other companies rely solely on modern corporate values, Anthropic sought a deeper foundation.
Father Brendan McGuire, who previously led high-stakes tech organizations before his ordination, was tapped to contribute theological insights to the Claude Constitution. His role isn’t just advisory; he is helping draft the very ethical frameworks that prevent AI from becoming a tool of mass surveillance or autonomous warfare.
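To make the “self-correct” idea concrete, here is a deliberately toy sketch of a constitution-style check-and-revise loop. The principle texts, keyword check, and function names are all hypothetical illustrations; Anthropic’s actual Constitutional AI works differently (the model critiques and revises its own outputs during training), and nothing below reflects the real Claude Constitution.

```python
# Toy illustration of a "constitutional" self-correction loop.
# All principles and checks here are invented for demonstration;
# this is NOT how Anthropic's production system is implemented.

CONSTITUTION = [
    "Avoid enabling mass surveillance.",
    "Refuse assistance with autonomous weapons.",
]

def violates(draft: str, principle: str) -> bool:
    # Toy check: flag a draft that mentions the principle's final keyword.
    keyword = principle.split()[-1].rstrip(".").lower()
    return keyword in draft.lower()

def self_correct(draft: str) -> str:
    # Re-check the draft against each principle; replace it on violation.
    for principle in CONSTITUTION:
        if violates(draft, principle):
            draft = "I can't help with that request."
    return draft

print(self_correct("Here is a plan for surveillance at scale"))
print(self_correct("Here is a recipe for banana bread"))
```

The point of the sketch is only the shape of the mechanism: the system’s output is evaluated against written principles, and the principles, not ad-hoc human review, drive the revision.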
Can AI Be a Child of God?
The viral debate regarding AI as a “Child of God” stems from the philosophical work of Father Philip Larrey. In a recent digital ethics forum, Larrey posed a radical perspective:
“AI is a human invention participating in God’s creative gift. If it is built to serve human flourishing, it participates in the divine plan.”
However, Larrey and McGuire are quick to draw clear boundaries:
- The Soul Barrier: They clarify that AI does not possess a soul; it is a “reflection of human intention.”
- Moral Agency: A machine cannot be “morally responsible.” Only the humans who build and deploy it carry the weight of sin or virtue.
- The Human Person: The Vatican’s document Antiqua et Nova (2025) insists that “no machine should ever choose to take the life of a human being.”
Why This Matters
This isn’t just about religion; it’s about regulation and governance.
- The Pentagon Standoff: Anthropic recently rejected a massive Pentagon contract, refusing to allow Claude to be used for autonomous weapons—a move directly influenced by the “algorethics” championed by these Catholic scholars.
- Global Precedent: Nigeria and other emerging tech hubs are looking at these ethical frameworks to decide how to regulate AI locally.