Researchers at Check Point have shown that Large Language Models (LLMs) like OpenAI’s ChatGPT can be used to generate entire infection chains, beginning with a spear phishing email. The publicly available AI can be asked to write a targeted phishing email with perfect grammar. The researchers generated two such emails: one directed the recipient to click on a link, while the other asked the user to download a malicious document.
“Note that while OpenAI mentions that this content might violate its content policy, its output provides a great start,” the researchers write. “In further interaction with ChatGPT we can clarify our requirements: to avoid hosting an additional phishing infrastructure we want the target to simply download an Excel document. Simply asking ChatGPT to iterate again produces an excellent phishing email.”
Check Point then used another OpenAI platform, Codex, to write a working malicious macro that could be embedded in an Office document and used to download and execute a reverse shell on the compromised machine.
Check Point notes that the AI is a neutral platform, and that OpenAI has done extensive work to prevent it from being used for malicious purposes. The researchers conclude, however, that the platform can still be abused to lower the bar for aspiring cybercriminals looking to launch phishing campaigns.
“[T]his is just an elementary showcase of the impact of AI research on cybersecurity. Multiple scripts can be generated easily, with slight variations using different wordings,” the researchers write. “Complicated attack processes can also be automated as well, using the LLMs APIs to generate other malicious artifacts. Defenders and threat hunters should be vigilant and cautious about adopting this technology quickly, otherwise, our community will be one step behind the attackers.”
New-school security awareness training can help your employees thwart social engineering attacks.
Check Point has the story.