Nearly two-thirds (64%) of Americans believe an AI deepfake political robocall could be convincing enough to impact the outcome of the 2024 US Presidential election – affirming the high stakes facing election officials, policymakers and regulators seeking to protect voters and election integrity in the coming weeks.
The survey data comes from a new Transaction Network Services (TNS) eBook released today: America Votes – 2024 US Presidential Election in Robocalls. The eBook details US adults’ fears of election interference from bad actors – foreign and domestic – using increasingly convincing AI deepfake technology to target voters, as well as who voters believe holds responsibility to combat the threat.
“The President Biden deepfake robocall targeting New Hampshire voters earlier this year served as a wake-up call on how AI can and will be used in potential disinformation campaigns as Election Day draws closer,” said Denny Randolph, President of TNS’ Communications Market business. “TNS has tracked political robocall volume since the 2018 US midterm elections, and our new eBook and survey data show Americans are not only worried about AI being used to trick voters, but also believe Generative AI has made it harder to tell the difference between legitimate and deepfake robocalls and robotexts.”
Several key data narratives emerged from the survey:
AI deepfakes undermine trust in voting information
TNS’ survey suggests Americans are aware of how AI can be used to influence their voting and election behaviors.
- 60% of US adults believe robocalls and robotexts are being used to undermine confidence in the 2024 Presidential Election.
Battleground states, key local races at risk
TNS analyzes 1.5 billion daily call events across hundreds of carrier networks. Americans expect bad actors to target key local races and swing states with political robocalls.
- 71% of Americans believe a voter in a battleground state is more likely to be targeted by an AI deepfake robocall attempt than a voter in a non-battleground state.
Americans support “Pre-Bunking”
Rather than focusing on debunking disinformation after the fact, election officials and other stakeholders increasingly view pre-bunking (warning voters about disinformation tactics before they encounter them) as an effective strategy. TNS’ survey affirms support for this approach:
- 77% agree that policymakers and regulators should educate Americans on the risks of political AI deepfakes and how to protect against them.
Boomers least trusting of political robocalls
TNS’ survey reveals notable generational variations, with boomers (ages 55-64) the least likely to trust robocall and robotext election information.
- Only 22% of boomers are more willing to trust a robocall if it includes an automated message from the candidate – compared to 43% of all Americans surveyed.
TNS commissioned KANTAR to survey more than 1,000 US adults aged 18-64 between August 15-19, 2024. To receive all the survey data, download a copy of TNS’ latest eBook at tnsi.com/resource/com/america-votes-2024-us-presidential-election-in-robocalls-ebook.