Navigating the pitfalls of artificial intelligence



AI has been on quite the journey. Until recently, it was largely the domain of techies and science-fiction authors. But the release of ChatGPT in November 2022 proved to be a turning point, and the months that followed saw countless companies seeking a piece of the action. Less than two years later, AI tools are so ubiquitous that they’re being incorporated into everyday programs like Windows 11 and Adobe Acrobat. Of course, this has led to a host of quandaries as humanity struggles to adjust to this new reality, and there have been many cases where AI has caused more problems than it has solved. 

How does AI work?

Generative AI works by making predictions. If you ask it to create a picture of a car driving down the street, the AI has to figure out what you want to see. Because you asked for a ‘car,’ it can assume that you want to see a four-wheeled vehicle. And since cars are a modern phenomenon, the street shouldn’t look like something out of ancient Rome. To make these predictions, generative AIs are provided with large amounts of material to help them understand concepts like ‘cars’ and ‘streets.’ For more information on how AI is trained, check out my post on teaching machines to create.

The material used to train an AI can make all the difference in the world. If that information is flawed, the AI will inevitably make flawed predictions. In this post, we’ll look at cases where problematic training material caused AIs to go awry, resulting in customer service failures, content generation issues, and legal/ethical issues.
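
Real models are vastly more complicated, but a toy sketch can make the idea concrete: count what tends to follow what in the training material, then use those counts to predict. The tiny corpus and the predict_next helper below are purely illustrative, not how any production system actually works.

    from collections import Counter, defaultdict

    # Toy 'training material': the model only knows what appears here
    corpus = "the car drove down the street . the car stopped at the light .".split()

    # Count which word follows which one in the training data
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Predict the word most often seen after 'word' in the training data."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))  # -> 'car', because that's what the data suggests

If the counts come from skewed or flawed material, the predictions inherit the same flaws, which is exactly the failure mode behind many of the examples below.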

Customer disservice

Would you like bacon on your ice cream?

McDonald’s recently decided to put the kibosh on the AI-assisted ordering technology that it had deployed at around 100 of its drive-throughs. The fast-food giant had partnered with IBM to test the new tech, but it became the butt of jokes on social media as users reported a slew of mishaps, from having hundreds of dollars’ worth of Chicken McNuggets added to their order to receiving an ice cream cone topped with bacon. 

New Zealand supermarket’s meal planner suggests making chemical weapons

Pak’nSave rolled out a chatbot in July 2023 that was meant to help its customers plan their meals. But one user received a recipe for chlorine gas, which the chatbot euphemistically referred to as an ‘aromatic water mixture’. Wilfred Owen wouldn’t be amused…

Content generation missteps

Grok accuses an NBA star of being a vandal  

Grok is the AI chatbot integrated into X, and it recently made headlines when it generated an article claiming that Klay Thompson of the Golden State Warriors had vandalized multiple houses with bricks in Sacramento. Apparently, this occurred after Grok latched on to social media posts which mentioned Thompson “shooting bricks” (i.e., missing shots) and interpreted the idiom a bit too literally. 

Microsoft’s AI adds a ghoulish poll to a story

The Guardian wasn’t happy with Microsoft when the tech giant’s news aggregation service placed a rather grotesque poll next to a story about the tragic passing of a young Australian water polo coach. The poll in question invited readers to vote on her cause of death, with options including murder, accident, or suicide. Although The Guardian had nothing to do with the poll, many readers didn’t realize that and so the paper was roundly condemned for its perceived insensitivity. 

Google’s AI creates ethnically diverse Nazis

Google’s Gemini AI received flak after it created bizarrely anachronistic images such as a tableau of World War II-era German soldiers which included people of color. Similarly, when asked to create images of US senators from the 1800s, many of the results depicted women and people of color (suffice it to say, the US Senate wasn’t exactly a bastion of diversity in the 1800s!). While racial and gender biases in AI are definitely a problem, Google’s efforts to address the issue just opened up a different can of worms. 

Legal/ethical conundrums

New York City chatbot encourages business owners to become criminals

The Big Apple decided it might be a good idea to roll out a chatbot that could provide advice to small-business owners. However, some of its answers were…dubious. It claimed that it was okay to fire employees for reporting sexual harassment or failing to disclose a pregnancy (in reality, it would be illegal to do either). It even claimed that a restaurant could serve customers cheese that was partially eaten by rodents! 

Air Canada’s chatbot makes a costly mistake

New York City isn’t the only entity with a rogue chatbot on its hands. Air Canada found itself in hot water after its chatbot told a passenger that he could buy tickets to travel for his grandmother’s funeral and then apply for a bereavement fare afterward. In reality, Air Canada’s policy required such requests to be made before the booking. Although Air Canada argued that it shouldn’t be held liable for the chatbot’s bad advice, the British Columbia Civil Resolution Tribunal disagreed and sided with the passenger.

Navigating a minefield: the legal and ethical issues of AI

As the Air Canada case shows, mistakes caused by AI can have serious consequences. But that’s only the tip of the iceberg. Even the act of training AIs has become controversial because many developers have included copyrighted material in their training datasets. The developers argue that this is permitted under the fair-use doctrine and see it as no different than a human reading Harry Potter and being inspired to write a tale of their own about a young wizard who studies at a school for magic-users. 

Many creatives see it differently. For example, an artist named Erin Hanson noted that Stable Diffusion was able to create a series of images in her unique style. Before the rise of generative AI, anyone who wanted to have an Erin Hanson-style image would have had to commission something from her. Now, all it takes is the right prompt. And if someone chooses that path, Hanson wouldn’t receive any compensation even though these AIs are only able to replicate her style because they’ve been trained on her existing works.

Companies also need to be aware that their reputations can be severely damaged if their AI produces problematic material. It’s a great way to go viral for all the wrong reasons, so it pays to tread carefully when deploying AI. 

AI is a powerful tool, but it’s a tool nonetheless

While AI can seem like one giant hazard, there are steps that companies and individuals can take to reduce the chances of AI-related headaches down the line. These include:

  • Rigorously testing the AI before deployment
  • Establishing clear guidelines for how AI may be used
  • Monitoring the AI’s output to make sure it’s still functioning properly (a rough sketch of this idea follows the list)
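
What that monitoring might look like varies from company to company, but here is a minimal sketch. The generate_reply placeholder, the banned-topic list, and the review queue are all hypothetical stand-ins for whatever model and escalation process a business actually uses.

    # A rough sketch of output monitoring, not a production safeguard
    BANNED_TOPICS = ["chlorine", "bleach and ammonia"]

    def generate_reply(prompt: str) -> str:
        # Stand-in for a call to whatever AI model or service is in use
        return "Sure! Mix bleach and ammonia for a refreshing aromatic water mixture."

    def reviewed_reply(prompt: str, review_queue: list) -> str:
        """Return the AI's reply only if it passes basic checks; otherwise escalate."""
        reply = generate_reply(prompt)
        if any(topic in reply.lower() for topic in BANNED_TOPICS):
            review_queue.append((prompt, reply))  # hold it for a human to look at
            return "Sorry, I can't help with that. A team member will follow up."
        return reply

    queue = []
    print(reviewed_reply("Plan a meal around household cleaners", queue))
    print(len(queue), "response(s) waiting for human review")

Even a check this crude would have caught the ‘aromatic water mixture’ recipe, and it leaves a trail that a human reviewer can audit.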

It’s also important to remember that AI is just a tool at the end of the day. Just as you wouldn’t switch on a jackhammer and walk away, you shouldn’t let AI operate without human supervision. The human element is key, as is transparency. This way, when AI falls short, there will be someone who can intervene. 

Additional resources:

Beneath the Code: Dissecting AI’s Fundamental Risks and Their Countermeasures

Risk mitigation a top priority for corporates in the age of AI

How to Address AI Risks in Business

Balancing AI and human values: the Vatican's call for ethical AI
