How to spot deepfakes: Voters are seeing more and fear their influence
Weeks after former President Trump survived an assassination attempt in Butler, Pa., a video circulated on social media that appeared to show Vice President Kamala Harris saying at a rally, “Donald Trump can’t even die with dignity.”
The clip provoked outrage, but it was a sham — Harris never said that. The line was read by an AI-generated voice that sounded uncannily like Harris’ and then spliced into a speech Harris actually gave.
A huge percentage of voters are seeing this sort of manipulation, and there’s growing concern about its effect on elections, according to a new survey of 2,000 adults by market research company 3Gem. The survey, commissioned by the cybersecurity company McAfee, found that 63% of the people interviewed had seen a deepfake in the previous 60 days, with 15% exposed to 10 or more.
Exposure to a variety of deepfakes was fairly uniform across the country, the survey said, with political deepfakes being the most common type seen. But politically themed deepfakes were especially prevalent in Michigan, Pennsylvania, North Carolina, Nevada and Wisconsin — swing states whose votes could decide the presidential election.
In most cases, survey respondents said, the deepfakes were parodies; a minority (40%) were designed to mislead. But even parodies and nondeceptive deepfakes can subliminally affect viewers by confirming their biases or reducing their trust in media, said Ryan Culkin, chief counseling officer at Thriveworks, a national provider of mental health services.
“It’s just adding another layer to an already stressful time,” Culkin said.
An overwhelming majority of the people surveyed for McAfee — 91% — said they were concerned about deepfakes interfering with the election, possibly by altering the public’s impression of a candidate or by affecting the election results. Almost 40% described themselves as highly concerned. Possibly because of the time of year, worries about deepfakes influencing elections, gaslighting the public or undermining trust in media were all up sharply from a survey in January, while concerns about deepfakes used for cyberbullying, scams and fake pornography were all down, the survey found.
Two other findings of note: Seven out of 10 respondents said they came across material at least once a week that made them wonder if it was real or AI-generated. Six out of 10 said they weren’t confident that they could answer that question.
At the moment, no federal or California statute specifically bans deepfakes in political ads. Gov. Gavin Newsom signed a bill into law last month that would have prohibited deceptive, digitally altered campaign materials within 120 days of an election, but a federal judge temporarily blocked it on 1st Amendment grounds.
Jeffrey Rosenthal, a partner at the law firm Blank Rome and an expert in privacy law, said California law does prohibit “materially deceptive” campaign ads within 60 days of an election. The state’s enhanced barrier to deepfakes in ads will not kick in until next year, however, when a new law will require political ads to be labeled if they contain AI-generated content, he said.
What you can do about deepfakes
McAfee is one of several companies offering software tools that help sniff out media with AI-generated content. Two others are Hiya and BitMind, which offer free extensions for the Google Chrome browser that flag suspected deepfakes.
Patchen Noelke, vice president of marketing for Hiya in Seattle, said his company’s technology looks at audio data for patterns that suggest it was generated by a computer instead of a human. It’s a cat-and-mouse game, Noelke said; fraudsters will come up with ways to evade detection, and companies like Hiya will adapt to meet them.
Ken Jon Miyachi, co-founder of BitMind in Austin, Texas, said at this point his company’s technology works only on still images, although it will have updates to detect AI in video and audio files in the coming months. But the tools for generating deepfakes remain ahead of the tools for detecting them, he said, in part because “there’s significantly more investment that’s gone into the generative side.”
That’s one reason it helps to maintain what McAfee Chief Technical Officer Steve Grobman called a healthy skepticism about the material you see online.
“We all can be susceptible” to a deepfake, he said, “especially when it’s confirming a natural bias that we already have.”
Also, bear in mind that images and sounds generated by artificial intelligence can be embedded in otherwise authentic material. “Taking a video and manipulating just five seconds of it can really change the tone, the message,” Grobman said.
“You don’t have to change a lot. One sentence inserted into a speech at the right time can really change the meaning.”
State Sen. Josh Becker (D-Menlo Park) noted that there are at least three state laws due to take effect next year to require more disclosure of AI-generated content, including one he authored, the California AI Transparency Act. Even with those measures, he said, the state still needs residents to take an active role in spotting and stopping disinformation.
He said the four main things people can do are to question content that provokes strong emotions, verify the source of information, share information only from reliable sources, and report suspicious content to election officials and the platforms where it’s being shared. “If something hits you very emotionally,” Becker said, “it’s probably worth taking a step back to think, where does this come from?”
On its website, McAfee offers a set of tips for identifying probable deepfakes, avoiding election-related scams and not spreading bogus media. These include:
- In texts, look for repetition, shallow reasoning and a dearth of facts. “AI often says a lot without saying much at all, hiding behind a glut of weighty vocabulary to appear informed,” the site advises.
- In images and video, zoom in to look for inconsistencies and odd movements by the speaker, and listen for sounds that don’t match what you’re seeing.
- Try to corroborate the material with content from other, well-established sites.
- Don’t take anything at face value.
- Examine the source, and if the material is an excerpt, try to find the original media in context.
For anything you don’t see with your own eyes or view through a 100% trustworthy source, “assume it might be photoshopped,” Grobman advised. He also warned that it’s easy for fraudsters to clone official election sites, then change some of the details, such as the location and hours of polling places.
That’s why you should trust voting-related sites only if their URLs end in .gov, he said, adding, “If you don’t know where to start, you can start at Vote.gov.” The site offers information about elections and voting rights, as well as links to every state’s official elections site.
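For readers who want to apply the .gov rule programmatically — say, when screening links in bulk — the check has one subtlety worth spelling out: the suffix must be tested against the parsed hostname, not the raw URL string, or look-alike addresses slip through. This is a minimal sketch of that idea in Python (not an official tool from McAfee or any election authority):

```python
from urllib.parse import urlparse

def is_gov_site(url: str) -> bool:
    """Return True only if the URL's hostname ends in .gov.

    Testing the parsed hostname (rather than searching the raw
    string for ".gov") avoids being fooled by look-alikes such as
    'https://vote.gov.example.com' or 'https://example.com/?q=vote.gov'.
    """
    host = urlparse(url).hostname or ""
    return host.endswith(".gov")
```

For example, `is_gov_site("https://www.vote.gov/es")` is true, while `is_gov_site("https://vote.gov.example.com")` is false, because the real hostname there ends in `example.com`.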
“The ability to have so much of our digital world be potentially fake degrades trust all around,” Grobman said. At the same time, he said, “when there is legitimate evidence of malfeasance, of a crime, of unethical behavior, it’s all too easy to claim it was fake. … Our ability to hold individuals accountable when evidence does exist is also damaged by the rampant availability of digital fakes.”