The other day I stumbled upon an interview that POLITICO did with Fr. Paolo Benanti, a Franciscan friar who advises Pope Francis on matters related to artificial intelligence. Until I read that article, I had no idea that the Vatican was paying such close attention to AI. But I’m glad it is, because it can offer a unique and valuable perspective that’s often overlooked.
Benanti’s core argument is that we need to have “human-centric” AI that operates within carefully prescribed limits. As he told POLITICO, “[s]ome people treat AIs like idols, like oracles, like demigods. The risk is that they delegate critical thinking and decisional power to these machines.”
The Rome Call for AI Ethics
To address this problem, Benanti helped spearhead the Rome Call for AI Ethics back in 2020. It envisions AI that “serves every person and humanity as a whole; that respects the dignity of the human person, so that every individual can benefit from the advances of technology; and that does not have as its sole goal greater profit or the gradual replacement of people in the workplace.” This can be accomplished by adhering to the principles of transparency, inclusion, accountability, impartiality, reliability, and security and privacy. In other words, AI must be overseen by humans, and it should always act in a way that’s just and open.
Cautionary tales: misuse of AI
It’s not hard to see the appeal of these principles given that the news has been filled with a cavalcade of dystopian stories involving AI. The ‘Willy Wonka Chocolate Experience’ in Glasgow, Scotland, recently made international headlines due to the gaping chasm between the way it was advertised and the actual event. The organizers appear to have made liberal use of AI-generated imagery when promoting the show. There was little to no quality control on display, as closer inspection of the images reveals bunnies (?) with unnatural faces and gibberish text. And according to the actor who played Willy Wonka, the script itself may also have been created by AI.
On a far more serious note, AI was used to mimic President Biden’s voice in a robocall that discouraged voters from taking part in New Hampshire’s Democratic primary, while deepfakes of Taylor Swift have been spread on X in which she seems to endorse Donald Trump and cast doubt on the results of the 2020 election. Incredibly, there was another Swift-related deepfake scandal in January, when numerous AI-generated pornographic images of her went viral on X.
The importance of digital literacy
These stories underscore the importance of robust digital literacy. That’s why fact-checking is your friend. Ask yourself questions like:
- Who made it?
- What might their agenda be?
- Does it seem too good to be true?
We should approach everything we see online with the eye of a skeptic, even if it seems to pass the ‘vibe check.’ It’s easy to be deceived by false information simply because it fits our view of the world.
Responsible AI creation
Ultimately, creators also need to be mindful of how they use AI. While there are definitely bad actors who are willing to use AI to cause trouble, there are plenty of creators whose problematic usage of AI has been the result of carelessness or excessive haste rather than malice. One can’t help but wonder whether many of those missteps could have been avoided with a bit more care. There was nothing inherently wrong with using AI to help plan the Willy Wonka event, but it’s a tool with limitations, not an omnipotent miracle-worker. If all you do is hit the ‘generate’ button and call it a day, you’re going to end up with advertisements promising “Encherining Entertainment” and scripts with lines like “there is a man who lives here, his name is not known, so we call him the unknown.”
Balancing AI and human oversight
The human-centric approach to AI that Benanti advocates could have prevented many of these issues. He’s not some militant Luddite calling for AI to be banned; he’s simply calling for a thoughtful, deliberate approach to the technology. Rather than giving AI carte blanche to make decisions for us, we must learn to use it as a tool in the service of humanity. Keeping humans in the picture will help ensure that AI is constructive rather than destructive.