In February 2020, something unusual happened in Rome. Representatives from Microsoft and IBM sat down with the Vatican's Pontifical Academy for Life, the UN's Food and Agriculture Organization, and the Italian government — and together they signed a document about the ethics of artificial intelligence.
No product was launched. No profit was made. There were no stock price implications. It was simply a group of very different institutions agreeing, in writing, that AI needed to serve humanity rather than replace it.
You probably didn't hear much about it. And that's exactly the problem.
The Rome Call for AI Ethics is one of the most underreported developments in the global AI governance story. At a moment when everyone is debating AI regulation — the EU AI Act, the Biden executive order, the G7 frameworks — the Vatican quietly got there first. And the document they produced is more practical, more readable, and more ambitious than most people realize.
How It Happened
The story starts with Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life — the Vatican body responsible for questions of bioethics and human dignity. By the late 2010s, Paglia had become increasingly convinced that artificial intelligence posed the same kind of fundamental challenge to human dignity that bioethics had always wrestled with.
The Academy began hosting conversations about AI ethics as early as 2016, bringing scientists, philosophers, and technologists to Rome. What emerged was a recognition that the AI debate was happening almost entirely inside Silicon Valley — disconnected from the moral traditions and human concerns that the rest of the world actually cared about.
The answer was a document designed to be signed, not just read. They called the concept "algorethics": the ethical governance of algorithms from the ground up, built into AI design rather than bolted on afterward. The document was signed on February 28, 2020 — just weeks before the world shut down for a pandemic that would, ironically, accelerate AI adoption faster than almost anyone had predicted.
The Six Principles — What They Actually Mean
The Rome Call lays out six core principles: transparency, meaning AI systems must be explainable; inclusion, so that the needs of all human beings are taken into account and everyone can benefit; responsibility, requiring that those who design and deploy AI proceed with responsibility and transparency; impartiality, refusing to create or act according to bias; reliability, so that systems work dependably; and security and privacy, so that systems operate securely and respect the privacy of users. They're short — almost deceptively so. But each one is doing more work than it first appears.
Who Has Signed It — And Why That's Interesting
The original signatories in 2020 were the Vatican, Microsoft, IBM, the FAO, and the Italian government. The coalition has grown steadily, and the growth pattern tells an interesting story.
In 2021, the Vatican established the RenAIssance Foundation to steward the Rome Call. Father Paolo Benanti — a Franciscan friar, engineer, and ethics professor — became its scientific director, and has since been appointed to the UN Secretary-General's Advisory Body on Artificial Intelligence. In January 2023, Jewish and Muslim religious leaders signed alongside the Vatican. An initiative that began as a Catholic document had become genuinely interfaith.
The most remarkable moment came in July 2024, when the coalition gathered in Hiroshima, Japan for a conference called "AI Ethics for Peace." Representatives of eleven world religions — Christianity, Judaism, Islam, Buddhism, Hinduism, Zoroastrianism, Bahá'í, and more — signed together, and Pope Francis sent a message containing some of his most direct statements on AI to date.
Choosing Hiroshima was not an accident. The city is the most powerful available symbol of what happens when humanity develops transformative technology without adequate ethical reflection.
What Makes It Different from Other Frameworks
By 2020, AI ethics frameworks were hardly in short supply. What made the Rome Call different? Three things stand out.
First, the framing. Most tech-industry frameworks are written from the perspective of developers asking "how do we build this responsibly?" The Rome Call is written from the perspective of humanity asking "what do we actually need from this technology?" That leads to different emphases — particularly around dignity and the protection of the vulnerable.
Second, the coalition model. Rather than publishing a document and declaring victory, the Pontifical Academy built a mechanism for ongoing commitment. Signing the Rome Call is a public pledge that invites accountability.
Third, the theological depth. When the Rome Call frames AI ethics in terms of human dignity, it's drawing on two thousand years of philosophical reflection — not a consensus formed in the last five years of Silicon Valley soul-searching.
The Criticism: Does Any of This Actually Change Anything?
It's fair to ask. The Rome Call is voluntary. There are no enforcement mechanisms, no penalties, no audits. Father Benanti's answer: "The Rome Call for AI is not about evaluation or compliance. It's an offer of value, for individuals, for companies, for society, in a voluntary form."
That's compelling, but it has limits. The gap between voluntary commitments and binding regulation is hard to ignore — especially after the arrival of ChatGPT and GPT-4. To the Vatican's credit, the framework has been updated: at Hiroshima, Benanti presented a specific addendum on the governance of generative AI. The coalition is still learning.
The Bottom Line
The Rome Call for AI Ethics is a voluntary framework with real limitations. But it has quietly gathered an extraordinary coalition around principles that put human dignity at the center of AI development. It introduced "algorethics" before most people knew what large language models were. It put the question of lethal autonomous weapons on the table at a moment when the world needed someone to say it plainly.
And it did all of this from Vatican City — 44 hectares, 800 people, and two millennia of practice thinking about what matters most. That's actually quite a lot.
Sources: Pontifical Academy for Life · RenAIssance Foundation · Vatican News · Catholic News Agency · CIO Magazine · USCCB · Cisco Newsroom