As AI technology capable of generating text, images, audio, and deepfake videos becomes more advanced, there is apprehension that it will further accelerate disinformation in an already politically polarized landscape, undermining voter confidence.
Ethan Bueno de Mesquita, from the University of Chicago, refers to the 2024 election as an "AI election," emphasizing the novel challenges AI presents in politics. There is fear that AI chatbots could mislead voters about essential election information, and AI could be used to create and spread false information against candidates or issues.
Polls indicate a growing worry among Americans about AI's role in spreading false information. A UChicago Harris/AP-NORC poll showed a bipartisan majority concerned about AI increasing disinformation. Similarly, a Morning Consult-Axios survey revealed an increase in U.S. adults who believe AI will negatively impact trust in candidate advertisements and election outcomes. Approximately 60% of respondents think AI-spread dis- and misinformation will affect the presidential race.
Instances of AI use in politics are already surfacing. For example, an AI-generated version of former President Trump's voice was used in a political ad, and his campaign released altered videos with voiceovers criticizing his opponents. And meet Ashley, the world's first AI-powered political campaign caller -- what could possibly go wrong?
There is a push for regulation. Google now requires election advertisers to disclose digitally generated or altered ads. Meta demands similar disclosures for photorealistic or realistically altered political ads. President Biden issued an executive order on AI, including safety standards and guidelines for content authentication.
Experts acknowledge AI's potential for positive uses in elections, such as voter list maintenance and issue-based candidate matching. However, concerns remain about AI-enhanced micro-targeting of misinformation. Nicole Schneidman from Protect Democracy suggests that while AI may not introduce new threats, it could amplify existing ones. The focus should be on mitigating known threats rather than anticipating every AI use case.
The article concludes with a quote from Schneidman: "The advantages that pre-bunking gives us is crafting effective counter messaging that anticipates recurring disinformation narratives and hopefully getting that in the hands and in front of the eyes of voters far in advance of the election, consistently ensuring that message is landing with voters so that they are getting the authoritative information that they need." We could not agree more. Training your workforce to recognize social engineering in any form is a critical part of your human firewall.
Full article here: https://thehill.com/homenews/campaign/4371959-ai-artificial-intelligence-2024-election-deepfake-trump/