How To Politely Tell Someone They Fell For An AI-Generated Image

With the election approaching, there’s no shortage of deepfakes out there. Should you call it out if someone posts one?
Anyone can fall for deepfakes, especially now that AI tech is looking less low-budget and more sophisticated.
Illustration: Jianan Liu/HuffPost; Photo: Getty Images

With few guardrails in place, social media is a wasteland of fake, artificial intelligence-augmented images and audio. And with the November election approaching, there seem to be more deepfakes out there than ever.

No, Taylor Swift did not release a poster endorsing GOP presidential nominee Donald Trump while wearing full Uncle Sam regalia. (That didn’t stop the former president from sharing it on his Truth Social platform.)

That “campaign video” ― reposted by tech billionaire Elon Musk ― in which the Democratic presidential nominee, Vice President Kamala Harris, mentions President Joe Biden’s “senility”? Also fake.

In a more unnerving example from earlier this year, robocalls using AI to mimic Biden’s voice discouraged voters from participating in the New Hampshire primary.

Trump, who himself has gotten the AI-manipulated photo treatment ― remember those fake images of him in police custody before his first criminal indictment? ― has further sown the seeds of distrust by suggesting that images of Harris’ crowds at campaign events were AI-augmented.

A whopping 78% of American adults expect an abuse of deepfakes and artificial intelligence systems to affect the outcome of the 2024 presidential election, according to a spring survey by the Elon University Poll and the Imagining the Digital Future Center at North Carolina’s Elon University.

Both Kamala Harris, left, and Donald Trump have been the subjects of AI-manipulated media amid the 2024 presidential election.
Montinique Monroe / Luke Hales via Getty Images

According to that same survey, 45% of American adults say they’re not confident that they can detect fake photos.

“It was very striking that this uncertainty was across the board among demographic groups,” Lee Rainie, the director of the Imagining the Digital Future Center, told HuffPost.

“Young people and their elders, well educated and less well educated, men and women, Republicans and Democrats, all expressed some level of this self-doubt. I take that to mean that Americans themselves worry they can be victimized,” he said.

While baby boomers tend to get a bad rap for falling for some of the more absurdist examples of AI ― like skiing dogs and toddlers, or obvious famine porn that tugs on the heartstrings ― anyone can fall for deepfakes, especially now that AI tech is getting more sophisticated.

Facebook has turned into an endless scroll of AI photos and the boomers don’t appear to have noticed pic.twitter.com/qvkIbEGgQw

— Justine Moore (@venturetwins) February 19, 2024

Oftentimes, people get duped by AI content that reinforces their own interests or preferences, said Julia Feerrar, an associate professor and the head of digital literacy initiatives at the University Libraries at Virginia Tech.

“I’ve almost been fooled multiple times by fake social media posts about reboots of my favorite TV shows,” she said. “So much misleading content is created to appeal to our emotions, whether that’s shock, anger or excitement. And it’s such a human thing to want to act on those feelings by sharing.”

If anyone can fall for these, at some point it’s likely to be one of your own friends or family members. When they do share fake content ― or send it to the group text ― should you say something, or just let it be? Here’s what experts think.

Ask yourself: Could this post do damage to a person’s reputation?

If someone has staked their reputation on a picture, video or piece of audio content by sharing or recommending it, they’d probably want to take it down if they found out it was fake. Rainie thinks it would be generous and empathetic to discreetly point out the mistake.

“You know that Ad Council public service message a few decades ago against drunk driving that had the tagline ‘friends don’t let friends drive drunk’? In the age of deepfakes that can be shared widely on social media, the equivalent ad nowadays could be ‘friends don’t let friends look like idiots,’” he said.

"I think we’re still at a point where most people aren’t actively looking out for AI-generated content and probably shared with no ill intent," said Julia Feerrar of the University Libraries at Virginia Tech.
urbazon via Getty Images
"I think we’re still at a point where most people aren’t actively looking out for AI-generated content and probably shared with no ill intent," said Julia Feerrar of the University Libraries at Virginia Tech.

Give some thought to how calling it out might impact your relationship

Feerrar said that, in general, pointing out a fake image is usually worthwhile.

“I think we’re still at a point where most people aren’t actively looking out for AI-generated content and probably shared with no ill intent,” she said. “A gentle nudge from a friend can go a long way in building awareness.”

It’s a context-specific decision, though, she said. Sometimes, the right choice for your digital well-being may be not to engage with harmless misinformation or a stupid, doctored pic. (It’s probably OK to just let Nana believe that image of a swagged-out Pope Francis is real.)

“In my conversations with students at Virginia Tech, the stakes of the content itself and the relationship you have with the person who shared it often come up as important factors in the decision to call out misinformation publicly, to message someone privately or just keep scrolling,” Feerrar said.

Do it off-thread if you can

Most of us don’t want to be confrontational. If you do want to speak up, shaming people in public never accomplishes much and can even escalate the problem, said Janet Coats, the managing director of the Consortium on Trust in Media and Technology at the University of Florida.

That’s why she recommends taking the conversation off the thread, and preferably having it face-to-face or over the phone rather than in texts or private messages.

“Our research has found that one-on-one conversation gives people space to listen and to reason, rather than default immediately to being defensive,” Coats told HuffPost. “The best chance we have for improving information quality is when we actually talk to each other.”

If you're going to alert someone that a pic they shared online is fake, consider doing it off-thread rather than on a post.
Hiraman via Getty Images

Finally, remember that anyone can fall for AI images, so stay aware yourself

Right now, AI-generated images still usually have a strange, hyperreal quality and inconsistencies in the things they depict, including garbled text, awkward transitions between objects, and malformed hands. (There might be eight fingers on one hand or fingers protruding from the middle of a palm, for example.) But as AI tools continue to improve, it might be you who falls for such images at some point.

If you see political or any other content that sparks big emotions or raises red flags, pause and look at it with a critical eye, Feerrar said. When determining if an image is real, you don’t want to rely solely on how it looks.

“Open your search engine of choice, describe what you’re seeing and add the phrase ‘fact check’ to your search,” she said. “Those search results should help you assess the accuracy of the content in question.”

Feerrar said to ask yourself: Where did this content come from? Is this from a reputable news organization or trusted platform? And can I find other sources sharing it or reporting on the same thing?

“Luckily those questions can often be answered with a quick search in a new browser window,” she said.

As long as tech algorithms continue to promote AI content, this kind of reflexive, DIY fact-checking will have to become part of our evolving digital literacy. “It’s going to take all of us to keep figuring out what it means to be a person in our digital world,” Feerrar said.
