Threat Actors Increasingly Use Generative AI to Automate Social Engineering



Threat actors continue to use generative AI tools to craft convincing social engineering attacks, according to Glory Kaburu at Cryptopolitan.

“In the past, poorly worded or grammatically incorrect emails were often telltale signs of phishing attempts,” Kaburu writes.

“Cybersecurity awareness training emphasized identifying such anomalies to thwart potential threats. However, the emergence of ChatGPT has changed the game. Even those with limited English proficiency can now create flawless, convincing messages in perfect English, making it increasingly challenging to detect social engineering attempts.”

Legitimate AI tools like ChatGPT include safeguards against malicious use, but threat actors can often find ways around these restrictions.

“OpenAI has implemented some safeguards in ChatGPT to prevent misuse, but these barriers are not insurmountable, especially for social engineering purposes,” Kaburu says. “Malicious actors can instruct ChatGPT to generate scam emails, which can then be sent with malicious links or requests attached. The process is remarkably efficient, with ChatGPT quickly producing emails like a professional, as demonstrated in a sample email created on request.”

Threat actors can also use AI-generated voice messages to supplement their attacks.

“While ChatGPT primarily focuses on written communication, other AI tools can generate lifelike spoken words that mimic specific individuals,” Kaburu writes. “This voice-mimicking capability opens the door to phone calls that convincingly imitate high-profile figures. This two-pronged approach—credible emails followed by voice calls—adds a layer of deception to social engineering attacks.”

Kaburu offers the following recommendations to help users avoid falling for AI-generated social engineering attacks:

  • “Incorporate AI-generated content in phishing simulations to familiarize employees with AI-generated communication styles.”
  • “Integrate generative AI awareness training into cybersecurity programs, highlighting how ChatGPT and similar tools can be exploited.”
  • “Employ AI-based cybersecurity tools that leverage machine learning and natural language processing to detect threats and flag suspicious communications for human review.”
  • “Utilize ChatGPT-based tools to identify emails written by generative AI, adding an extra layer of security.”
  • “Always verify the authenticity of senders in emails, chats, and texts.”
  • “Maintain open communication with industry peers and stay informed about emerging scams.”
  • “Embrace a zero-trust approach to cybersecurity, assuming threats may come from internal and external sources.”
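As a loose illustration of the third and fourth recommendations above, the sketch below scores an inbound email body with an open-source NLP classifier and routes likely machine-generated text to a human reviewer. It assumes the Hugging Face transformers library; the detector model, label names, and threshold shown here are assumptions used only for illustration, not a recommendation of any specific tool.

    # Minimal sketch: flag possibly AI-generated emails for human review.
    # Assumes the Hugging Face "transformers" library is installed. The model
    # name, label names, and threshold are illustrative assumptions; substitute
    # a detector your security team has actually evaluated.
    from transformers import pipeline

    # Publicly hosted GPT-2 output detector, used here purely as an example.
    detector = pipeline(
        "text-classification",
        model="openai-community/roberta-base-openai-detector",
    )

    def needs_human_review(email_body: str, threshold: float = 0.8) -> bool:
        """Flag an email for analyst review if the detector is confident it is machine-generated."""
        # Returns a dict like {"label": ..., "score": ...}; truncate long emails
        # to the model's input limit.
        result = detector(email_body, truncation=True)[0]
        # This particular model labels text "Fake" (machine-generated) or "Real";
        # check the model card for whichever detector you deploy.
        return result["label"] == "Fake" and result["score"] >= threshold

    if __name__ == "__main__":
        sample = ("Dear valued employee, your payroll profile requires immediate "
                  "verification. Please confirm your credentials at the link below.")
        print("Route to human review:", needs_human_review(sample))

A score like this is only a routing signal, not a verdict; consistent with the recommendation, it should flag messages for human review rather than block them automatically.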

KnowBe4 enables your workforce to make smarter security decisions every day. Over 65,000 organizations worldwide trust the KnowBe4 platform to strengthen their security culture and reduce human risk.

Cryptopolitan has the story.


Free Phishing Security Test

Would your users fall for convincing phishing attacks? Take the first step now and find out before bad actors do. Plus, see how you stack up against your peers with phishing Industry Benchmarks. The Phish-prone percentage is usually higher than you expect and is great ammo to get budget.

Here's how it works:

  • Immediately start your test for up to 100 users (no need to talk to anyone)
  • Select from 20+ languages and customize the phishing test template based on your environment
  • Choose the landing page your users see after they click
  • Show users which red flags they missed, or a 404 page
  • Get a PDF emailed to you in 24 hours with your Phish-prone % and charts to share with management
  • See how your organization compares to others in your industry

Go Phishing Now!

PS: Don't like to click on redirected buttons? Cut & Paste this link in your browser:

https://www.knowbe4.com/phishing-security-test-offer


