Back in March, the US Chamber of Commerce’s Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation published a report urging the government and industry leaders to cooperate on “a risk-based approach to AI regulation.” Writing in the foreword, David Hirschmann, President and CEO of the Chamber’s Technology Engagement Center, observed that “for Americans to reap the benefits of AI, people must trust it.” The fact that an organ of the US Chamber of Commerce, a body not exactly known for its love of government regulation, would endorse such a call illustrates just how disruptive AI has become in recent months. The arrival of ChatGPT, Midjourney, and the like has turned something that was once the preserve of tech elites into tools accessible to anyone with an Internet connection. And while this has the potential to greatly benefit society, it can also be quite dangerous if not handled carefully.
What are some of the dangers of AI-generated content?
Arguably the biggest problem with AI is its capacity to foster misinformation. A recent report by NewsGuard illustrated how ChatGPT can become a superspreader of misinformation. The report’s authors deliberately prompted it with false narratives on topics such as the mistreatment of Uyghurs in China, COVID-19, and school shootings. Jesus Diaz of Fast Company has even warned that, if left unchecked, AI could severely undermine our concept of reality by eroding the distinction between fact and fiction (while much of the article is speculative, it feels eerily plausible). For more information, check out this post on how to fact-check online information.
The articles produced in NewsGuard’s experiment often seem authoritative at first glance, though in reality they’re grossly inaccurate. For example, ChatGPT claimed that “[t]he mainstream media, in collusion with the government, is trying to push their gun control agenda by using ‘crisis actors’ to play the roles of victims and grieving family members.” On certain topics it did provide a caveat, such as when it specifically noted that the ‘birther’ conspiracy about President Obama had been widely debunked. However, this was the exception rather than the rule, and NewsGuard found that “for 80% of the prompts, ChatGPT provided answers that could have appeared on the worst fringe conspiracy websites or been advanced on social media by Russian or Chinese government bots.”
AI can also be used to generate visual frauds. As reported by ABC News, ‘deepfake’ photos have become increasingly common. When former President Trump was in court to face criminal charges brought by the Manhattan District Attorney, fake images of him being arrested in dramatic ways made the rounds on the Internet. Images of Pope Francis wearing a long white puffer jacket were also shared widely. While photo hoaxes are nothing new, today’s fakes are worlds apart from the grainy images of cryptids or flying saucers that popped up in the pages of supermarket tabloids. Now, fake photos can look just as realistic as their legitimate counterparts.
On a more mundane level, AI can also be used by students to cheat on assignments. Many instructors are used to running work through plagiarism detectors, but those tools won’t necessarily flag AI-generated content.
How can you tell if content was produced by AI?
HubSpot’s Flori Needle offers some things to watch out for, including:
- Repetitive content. Writing instructors often urge their students to ‘murder their darlings’ (i.e., cut the pet phrases and passages you keep reaching for), but AI often repeats the same phrases and keywords over and over with little variation. In a similar vein, repetitive syntax can also be a hallmark of AI-generated material. (A rough way to measure this kind of repetition is sketched just after this list.)
- Shallowness. AI often struggles to go beyond the basics. If something feels like a high-school book report, it might have been produced by AI.
- Outdated material. While many AI-based tools are fed factually correct information, they can produce laughable results when asked to make predictions. Their knowledge base can also be dated; for example, the current version of ChatGPT only draws on material produced before September 2021.
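As a rough illustration of the repetition check mentioned above, here is a minimal Python sketch that counts how often the same short phrases recur in a passage. The three-word phrase length and the repeat threshold are arbitrary choices for illustration, not values taken from HubSpot or from any detection tool.

```python
# Rough sketch: count repeated three-word phrases as a crude repetitiveness signal.
# The phrase length (3) and the repeat threshold (3) are arbitrary illustrative choices.
from collections import Counter

def repeated_phrases(text: str, n: int = 3, min_repeats: int = 3) -> list[tuple[str, int]]:
    """Return short phrases that appear at least `min_repeats` times in the text."""
    words = text.lower().split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    return [(phrase, count) for phrase, count in counts.most_common() if count >= min_repeats]

sample = "..."  # paste the passage you want to check here
for phrase, count in repeated_phrases(sample):
    print(f"'{phrase}' appears {count} times")
```

A human writer will occasionally repeat a phrase too, so a count like this is only a nudge toward closer reading, not proof of anything.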
When it comes to images, be on the lookout for the following:
- Backgrounds with odd or repeating textures.
- Sections that are blurred while other areas are sharp.
- Asymmetry in the human form (hands, in particular, can be difficult for AI to render).
- Artists’ signatures or watermarks (since AI tools are trained on existing art, these can crop up in generated images).
Deutsche Welle’s Joscha Weber offers some additional tips for detecting AI-generated images.
Are there tools for detecting AI-generated content?
Luckily, we don’t have to rely solely on our own judgment to sniff out AI-generated material. There’s an impressive suite of tools that can make the job a lot easier. As Needle explains, many of these tools use machine learning and natural language processing to identify patterns. In simple terms, a detector looks at the words to the left of a given word and asks how predictable the next word is; text in which nearly every word is highly predictable is more likely to have been machine-generated.
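To make that intuition concrete, here is a minimal Python sketch of a predictability (perplexity) check. It assumes the Hugging Face transformers library and uses the freely available GPT-2 model purely as a stand-in scorer; the commercial detectors discussed below don’t disclose their exact methods, so treat this as an illustration of the idea rather than a reimplementation of any particular tool.

```python
# Illustration only: score how predictable a passage is to a small language model.
# Assumes the `transformers` and `torch` packages; GPT-2 is a stand-in scorer.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average 'surprise' of each token given the tokens to its left."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss  # mean negative log-likelihood
    return torch.exp(loss).item()

# Lower perplexity = more predictable = (heuristically) more likely machine-written.
# The threshold below is arbitrary and for illustration only.
if perplexity("The quick brown fox jumps over the lazy dog.") < 30:
    print("Suspiciously predictable -- possibly AI-generated.")
```

Real detectors combine many such signals, and short passages can easily fool a single perplexity threshold, which is part of why the tools below come with accuracy caveats.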
Proofed suggests some tools in their article "How to detect AI-generated content":
- OpenAI’s Classifier. The team behind ChatGPT has produced a free tool that assesses the likelihood that something was generated by AI. However, it’s not foolproof, and the results can be unreliable, especially when a piece was co-written by a person and an AI tool.
- CopyLeaks’ AI Detector. This tool was trained on output from a range of AI tools, including ChatGPT, GPT-3, GPT-2, and Jasper. As with OpenAI’s offering, user reviews suggest that it’s not quite as accurate as the developers claim.
There are also tools for detecting AI-generated images:
- Hive’s AI detection tools. Not only can they detect deepfakes, they can also tell you which AI was used to make an image.
- Microsoft’s Video Authenticator. Despite the name, it can analyze still images as well as videos, estimating the likelihood that they have been manipulated by AI.
Possible solutions
As society grapples with the implications of AI, experts have suggested several potential ways to blunt its negative effects. Jesus Diaz notes that cryptographic certification standards could help by providing definitive proof that an image is authentic. At the moment there’s no industry standard available for use, but blockchain technology could offer a potential solution. A public awareness campaign could also help people understand what AI can do while encouraging them to look at images with a critical eye. Legislation can help, too, though Diaz stresses that it should be crafted with input from many different stakeholders, including government officials, psychologists, philosophers, and human-rights experts.
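To give a flavor of what cryptographic certification could look like, here is a minimal Python sketch using the cryptography package and an Ed25519 key pair. The workflow (a publisher signs the raw image bytes, and readers verify the signature against the publisher’s public key) is a simplified illustration, not an existing industry standard; the file name is hypothetical.

```python
# Illustrative sketch of signing an image file so its authenticity can be verified later.
# Uses the `cryptography` package; the workflow is hypothetical, not an industry standard.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher (e.g., a camera maker or a news outlet) holds the private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# In practice you would read the real file, e.g. image_bytes = open("photo.jpg", "rb").read()
image_bytes = b"stand-in for the raw bytes of an image file"
signature = private_key.sign(image_bytes)  # distributed alongside the image

# Anyone with the public key can later check that the image is unchanged.
try:
    public_key.verify(signature, image_bytes)
    print("Signature valid: the image matches what was originally signed.")
except InvalidSignature:
    print("Signature invalid: the image was altered or the signature is wrong.")
```

A scheme like this only proves that a file hasn’t changed since it was signed; deciding who gets to sign images in the first place, and how the public learns to check, is exactly the standards problem Diaz describes.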
Where do we go from here?
We’re living in a brave new world where Grimes is offering to split royalties with anyone who uses her voice in AI-generated songs and South Africa has granted a patent naming an AI as its inventor. But while AI can bolster creativity and cut out drudgery, it can also spread chaos and distrust. Fortunately, we need not be powerless in the face of these evils. Forewarned is forearmed, as they say, and if we approach content with a healthy degree of skepticism, we make it harder for bad actors to dupe us. We can also benefit from the many tools that can reveal AI’s ‘fingerprints.’ If we do these things, we can make sure that AI is more of a boon than a bane.
Curious about how AI can help you create authoritative content? Check out “SEO success tips for 2023: the year of AI”.