Phishing Scammers are Using Artificial Intelligence To Create Perfect Emails

Stu Sjouwerman | Sep 12, 2023

Phishing attacks have long been detectable by their broken English, but generative artificial intelligence (AI) tools are now eliminating those red flags. OpenAI's ChatGPT, for instance, can fix the spelling mistakes, odd grammar, and other errors that are common in phishing emails.

This advancement in AI technology has made it easier for even amateur hackers to analyze vast amounts of publicly available data about their targets and create highly personalized and convincing emails within seconds. These emails can be tailored to mimic the writing style of the target's loved ones or friends, making them difficult to distinguish from legitimate communication.

Abnormal Security, an email security company, has observed phishing attacks created with generative AI platforms. These emails are polished and look legitimate, making them tricky to detect at first glance. The power of generative AI lies in its ability to scrape the web for personal information about a target and use it to craft tempting, tailored emails.

While ChatGPT and similar models have built-in protections against creating malicious content, many open-source large language models lack such safeguards. Hackers can license models capable of generating malware and resell them on darknet forums.

The future of AI-powered attacks is a growing concern for cybersecurity experts. AI technology has already been used to create deepfakes and simulate speech, making hybrid attacks involving email, voice, and video an approaching reality. The true threat lies in AI's potential to devise new attack methods that current systems are unable to detect.

To stay ahead of the game, some cybersecurity companies are using proprietary large language models to generate phishing emails for security awareness training. Defensive AI systems will be crucial in combating AI-powered attacks, but the challenge lies in AI's ability to generate convincing attacks at scale.

As the world becomes increasingly reliant on generative AI, corporate security practices must adapt. Improving employee training and awareness on phishing is essential, and networks should be carefully segmented to limit the damage hackers can cause.

Generative AI has undoubtedly transformed the phishing landscape, but it has also compelled cybersecurity companies to integrate AI into their defense strategies. The battle against AI-powered attacks will persist as organizations strive to keep pace with the evolving threat.

James Rundle has the full story in the Wall Street Journal.

Discover Your Organization’s Phish-prone™ Percentage

Ninety-one percent of data breaches begin with spear phishing. Launch our Free Phishing Security Test for up to 100 users to uncover your team's vulnerability and see how your security posture stacks up against industry benchmarks.

Get Your Free Phishing Security Test

Secure the Digital Workforce: Human + AI

KnowBe4 empowers the modern workforce to make smarter security decisions every day. Trusted by more than 70,000 organizations worldwide, KnowBe4 is the pioneer of digital workforce security, securing both AI agents and humans. The KnowBe4 Platform provides attack simulation and training, collaboration security, and agent security powered by AIDA (Artificial Intelligence Defense Agents) and a proprietary Risk Score. The platform leverages 15 years of behavioral data to combat advanced threats including social engineering, prompt injection, and shadow AI. By securing humans and agents, KnowBe4 leads the industry in workforce trust and defense.