AI ethics and the Vatican: A match made in heaven 

In late February 2020, leaders from IBM and Microsoft signed a Vatican pledge, backed by Pope Francis, calling for the development of ethical, “human-centred” ways of designing artificial intelligence (AI).

The pledge – called the Rome Call for AI Ethics – was co-signed by Microsoft President Brad Smith, IBM Executive VP John Kelly, and Archbishop Vincenzo Paglia, President of the Pontifical Academy for Life. The pledge also received the endorsement of notable public officials, including Qu Dongyu, Director-General of the UN Food and Agriculture Organization, and Paola Pisano, Italy’s Minister for Technological Innovation and Digitalization.

The Rome Call for AI Ethics proposes six key tenets similar to the European Union’s non-binding guidelines for “trustworthy AI” and the Trump administration’s guidance for federal regulation of AI. Broadly, the goal of the Rome Call and the new partnership between leaders in the private sector, the public sector, and the religious sphere is to increase dialogue on the role of ethics in technological development. The pledge calls for AI developers to place human beings and nature at the heart of digital innovation by supporting regulation in “high-risk” fields such as facial recognition and by instituting a “duty of explanation” that would require AI algorithms to identify their objectives and justify their findings.

Microsoft’s Brad Smith commented on the seemingly unlikely collaboration between business, religious, and political representatives, saying: “we need people who can work to close the gaps that divide us – so in some ways I look at the Catholic Church, I look at a company like Microsoft, and I say why not?”

Francesca Rossi, IBM’s global AI ethics leader, added “the Vatican is not an expert on technology, but on values. The collaboration is to make the Vatican and the whole of society understand how to use this technology with these values in mind.”

While the Vatican’s position as a moral authority and a promoter of technological development is debatable, the Catholic Church has particularly strong beliefs about the human purpose, shared by over one billion people globally. Perhaps the idea behind this collaboration is for those values to guide the decisions of potentially overzealous AI entrepreneurs and developers who, if unchecked, may act without considering the implications of technological innovation on life, humanity, and legal personality.

Conversely, that sensibility is predicated on the assumption that this is an act of good faith from all parties involved and not an attempt by two of the world’s leading AI developers to get into the good graces of the Church and its followers. Though there may be some truth to that, it would be overly pessimistic to dismiss the new partnership as merely a public relations exercise. It is likely that the executives of these corporations harbour some of the same fears the average person does about the unregulated development of AI.

Although the Vatican today has significantly less power and influence than in centuries past, it remains an important institution with the ability to sway not only public opinion, but also the actions of other multilateral institutions, governments, and world leaders. That said, the Vatican’s AI agreement is mostly a symbolic measure. The Rome Call is unlikely to carry any substantial legal or contractual authority, but the fact that Microsoft and IBM are inaugural signatories – alongside the many business executives and government officials who have expressed concern over AI ethics – demonstrates that many world institutions share some of the concerns of the Catholic Church.

The Vatican’s involvement in the regulation of AI may appear strange; however, the move is yet another of the Church’s efforts to modernize over the last two decades – especially under Pope Francis.

The Pontifical Academy for Life (PAL) was created 25 years ago by Pope John Paul II in response to rapid developments in biomedicine and concerns over genetic engineering. Since then, it has weighed in on other areas of technological development that could impose drastic costs on society and humanity in the absence of responsible decision-making.

Last year, Archbishop Paglia – President of the PAL and a key voice in the drafting of the Rome Call – responded to comments made by Japanese scientist and innovator Hiroshi Ishiguro, who has become an icon for creating extremely human-like robots in his lab at Osaka University. At a conference on robo-ethics held at the Vatican, Ishiguro said that “the ultimate aim of human evolution is immortality by replacing the flesh and bones with inorganic material.” In essence, he sees the development of artificially intelligent androids that are virtually indistinguishable from organic humans as the next phase in human evolution.

Archbishop Paglia responded: “This dream is a terrible dream [...] The risk is we forget we are creatures, not creators.” To the Church and its followers, Ishiguro’s comments are downright blasphemous – and even to non-religious people, they might sound pretty alarming. To Ishiguro, a futuristic innovator and Japanese citizen whose country is facing demographic decline due to its aging population and low immigration rates, the Church is limiting humanity’s potential, not protecting it.

The stark contrast between these two views of humanity is why this new collaboration to create the Rome Call is a positive development in global leadership on AI. If it seems like an unlikely partnership, that’s because it is – and frankly, we need more of that in the world today: two groups with fundamentally different world views coming together to reconcile their differences in pursuit of a common goal. The common ground between them has the potential to provide a foundation for ensuring that the future development of AI is both sustainable and ethical, with the best interests of humanity at heart.

Anthony Moniuszko

Anthony graduated from Carleton University in 2019 with an Honours Bachelor of Arts in Business Law and a minor in Business. As an undergraduate student, Anthony worked for Carleton’s Athletics Department as the Competitive Clubs Coordinator, volunteered at Operation Come Home Ottawa, and co-founded the Carleton Cryptocurrency Club. Anthony and one of his former professors at Carleton recently collaborated on a short paper regarding smart contracts and blockchains in the context of contractual incompleteness and holdup theory. His primary research interests include international law, trade regulation, and global security. After completing his Master of Global Affairs degree, Anthony hopes to obtain a law degree and eventually begin working in the field of international law and trade regulation.
