The AI Threat: How America's 2024 Election Could Be Compromised

[Image generated by ChatGPT]

I found an interesting article at THE HILL that discusses the rising concerns about how AI might influence the upcoming U.S. 2024 elections.

As AI technology capable of generating text, images, audio and deepfake videos becomes more advanced, there is apprehension that it will further accelerate disinformation in the politically polarized landscape, undermining voter confidence.

Ethan Bueno de Mesquita, from the University of Chicago, refers to the 2024 election as an "AI election," emphasizing the novel challenges AI presents in politics. There is fear that AI chatbots could mislead voters about essential election information, and AI could be used to create and spread false information against candidates or issues.

Polls indicate a growing worry among Americans about AI's role in spreading false information. A UChicago Harris/AP-NORC poll showed a bipartisan majority concerned about AI increasing disinformation. Similarly, a Morning Consult-Axios survey revealed an increase in U.S. adults who believe AI will negatively impact trust in candidate advertisements and election outcomes. Approximately 60% of respondents think AI-spread dis- and misinformation will affect the presidential race. 

Instances of AI use in politics are already surfacing. For example, an AI-generated version of former President Trump’s voice was used in a political ad, and his campaign released altered videos with voiceovers criticizing his opponents. And meet Ashley, the world’s first AI-powered political campaign caller -- what could possibly go wrong?

There is a push for regulation. Google now requires election advertisers to disclose digitally generated or altered ads. Meta demands similar disclosures for photorealistic or realistically altered political ads. President Biden issued an executive order on AI, including safety standards and guidelines for content authentication.

Experts acknowledge AI's potential for positive uses in elections, such as voter list maintenance and issue-based candidate matching. However, concerns remain about AI-enhanced micro-targeting of misinformation. Nicole Schneidman from Protect Democracy suggests that while AI may not introduce new threats, it could amplify existing ones. The focus should be on mitigating known threats rather than anticipating every AI use case.

The article concludes with a quote from Schneidman: "The advantages that pre-bunking gives us is crafting effective counter messaging that anticipates recurring disinformation narratives and hopefully getting that in the hands and in front of the eyes of voters far in advance of the election, consistently ensuring that message is landing with voters so that they are getting the authoritative information that they need." We could not agree more. Training your workforce to recognize social engineering in any form is a critical part of your human firewall.

Full article here:

Will your users respond to phishing emails?

KnowBe4's Phishing Reply Test (PRT) is a complimentary IT security tool that makes it easy to check whether key users in your organization will reply to a highly targeted phishing attack without clicking on a link. PRT gives you quick insight into how many users will take the bait so you can take action to train your users and better protect your organization from these fraudulent attacks!

Here's how it works:

  • Immediately start your test with your choice of three phishing email reply scenarios
  • Spoof a sender’s name and email address your users know and trust
  • Phish for user replies and get the results returned to you within minutes
  • Get a PDF emailed to you within 24 hours with the percentage of users that replied

Go Phishing Now!

PS: Don't like to click on redirected buttons? Cut & Paste this link in your browser:

Subscribe to Our Blog

Comprehensive Anti-Phishing Guide

Get the latest about social engineering

Subscribe to CyberheistNews