Transparency is the key to building trust with AI-generated content

The other day, a colleague of mine was telling me about a conversation he had with a friend who works in higher education. She lamented the fact that more and more of her students are turning to AI for help with their assignments. Unfortunately, their use of these tools is often highly problematic. Sometimes they’re simply too trusting, turning in work that’s riddled with misinformation. In other cases, they’re passing AI-generated content off as their own. It doesn’t help that the world of AI is a lot like the Wild West at the moment. With formal restrictions few and far between, it’s up to individual creators to ensure that they’re using AI ethically.

Why is it important to use AI ethically?

Deceitful content is nothing new. From counterfeit coins to staged photographs of cryptids, there have always been people looking to hoodwink others. Historically, though, that threat was counterbalanced to some extent by the considerable effort fraud demanded of the fraudster. AI changes that: photorealistic fakes can now be created at the push of a button, and social media lets deceit spread far and wide. People are understandably unsettled by this new reality. That means even the most innocuous use of AI can undermine your reputation if your readers feel you aren’t being honest with them.

Transparency is key

Transparency is the cornerstone of ethical AI usage. The world may want to be deceived, or so the adage goes, but individuals tend to get annoyed if they feel like they’ve been lied to. If your audience realizes that you’ve been secretly using AI in public-facing content, it’s going to be hard for them to trust you. If you’re being deceptive about something like that, they may well wonder what else you’re hiding. 

Luckily, simply being upfront about the ways in which you’re using AI can go a long way toward establishing trust with your audience. For example, if you used AI to smooth out some rough passages in a post, you could insert a disclaimer saying something like “This post was edited with [insert name of AI tool].” You could even have a dedicated page on your site that explains your use of AI, like this example from The San Francisco Chronicle.

Note the line about how “no content from our newsrooms will be published without an editor’s review.” This helps assure readers that AI is genuinely adding value to the Chronicle’s content. 

ARLnow’s description of its AI usage is also worth a look. It goes into quite a bit of detail about how the site uses AI on a day-to-day basis, emphasizing that the technology is reserved primarily for mundane tasks such as choosing emojis, summarizing content for use on other platforms, and basic proofreading. Again, it assures readers that the newsroom sees AI as just one of the many tools in its arsenal.

Be realistic about AI’s capabilities

  • AI is a powerful tool, but it has limitations and can make mistakes. It’s not an infallible solution.
  • The quality of AI output depends on its training data, which may contain biases that can skew results.
  • Content creators should review and refine AI-generated content to ensure accuracy, align with intended messages, and avoid reinforcing harmful stereotypes.

AI’s status as a tool is something worth highlighting to your audience. It can be easy to get carried away by the hype and fall into the trap of believing that AI is some all-knowing, infallible oracle. But that’s simply not true. Imagine you’re taking a math test with the aid of a basic calculator. It can trivialize arithmetic problems: if you’re asked to multiply 56 by 653, all you need to do is plug those numbers into your device and you’ll have your answer. On the other hand, if you’re told that a ship sails 10 kilometers east and then 14 kilometers north and you need to calculate how far away it is from its starting point, that calculator isn’t going to help unless you realize that the Pythagorean theorem is the key to solving the problem.
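To make that concrete, here’s a quick sketch of both calculations in Python, using the numbers from the example above. The point is that the second answer only falls out once you know which formula to apply.

```python
import math

# The calculator-friendly problem: plug in the numbers and you're done.
print(56 * 653)  # 36568

# The ship problem: the arithmetic is just as easy, but only once you
# recognize that the Pythagorean theorem applies to the two legs.
east_km, north_km = 10, 14
distance_km = math.sqrt(east_km**2 + north_km**2)
print(round(distance_km, 1))  # roughly 17.2 km from the starting point
```

The tool handles the number-crunching either way; knowing which calculation to run is still on you.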

The same principle applies to AI. Not only do you need to make sure you’re using the right tool for the job (check out this post for some of the best AIs for a range of different tasks), but you also need to remember that any AI is only as good as the material it has been trained on. Unfortunately, a lot of this material is biased in some way. For example, Amazon built an AI tool to screen job applicants, only to discover that it was biased against women. The tool had been trained on resumes submitted to Amazon over the course of a decade, and given that men are disproportionately represented in the tech sector, it’s no wonder the system absorbed that bias (for more information on how bias can shape AI and related technologies, check out this post by MediaJustice).

While AI creators can (and should) take steps to correct biases in their products, those of us who use AI also need to do our part by creating mindfully. Let’s say you ask AI to generate an image of scientists at work in the lab and it keeps giving you a bunch of white men. Publishing an image like that might not seem like much of a problem, but it subtly reinforces the idea that women and people of color don’t belong in STEM. Rather than perpetuate outdated stereotypes, adjust your prompt until you get better results. Of course, efforts to avoid bias can give rise to entirely new problems. Earlier this year, Google was roundly criticized after its Gemini AI created anachronistic images, such as one that depicted a Black man and an Asian woman as German soldiers from World War II. In its effort to avoid perpetuating stereotypes, Google had instructed Gemini to steer clear of problematic assumptions, but that led the model to overcorrect.

When in doubt, ask your audience

You may worry that openly admitting to AI usage could provoke pushback from your audience. If so, it might be worth engaging with your readers to find out how they feel about the subject. ARLnow’s AI policy states that the site no longer uses AI-generated images, a change that came about after it polled its readers and found that almost half of respondents (48%, to be precise) didn’t want to see AI-generated images at all. The images could also prove distracting, with readers sometimes focusing on the illustration rather than the accompanying story. Although the site’s staff had the best of intentions, they decided AI imagery was simply too polarizing to use. If your readers feel the same way, think carefully before using AI in a public-facing manner.

AI is a double-edged sword

AI has revolutionized the creative process. By allowing bloggers, journalists, writers, and others to offload the tasks they’d rather not do, it has given them more time to spend on the parts of the process they genuinely enjoy. But AI’s versatility and sheer power have the potential to cause great harm if the technology isn’t used responsibly. In the absence of formal rules governing its use, anyone using AI needs to make sure they’re doing so in an ethical manner. Fundamentally, this boils down to transparency. Be open with your audience about how you use AI, and don’t be afraid to remind them that it’s just a tool. Assure them that it won’t replace the human touch that makes your content authoritative. And if you’re worried about how they’ll react to AI, don’t be afraid to seek guidance from your audience directly. By doing these things, you can ensure that AI remains a boon instead of a bane.
