Large Language Models (LLMs) provide the fine-tuning that AI engines like ChatGPT need to focus scam email output on only the most effective content, potentially resulting in a wave of new email scams.
I recently wrote about how AI tools like ChatGPT will revolutionize the content used in phishing emails. In short, gone are the days of poorly written scam emails, because ChatGPT now writes them. This single advance addresses the bottleneck in any threat group’s phishing activities – writing the persuasive email designed to elicit a response from the potential victim. With well-written, influential emails come larger percentages of tricked victims. But the challenge with ChatGPT is that it’s not perfect. Even an AI engine can spout nonsense, and with scammers often not being native speakers of the language of those they attack, the possibility exists that even a ChatGPT-created email can fail.
Enter LLMs.
TechTarget defines Large Language Models as “a type of artificial intelligence (AI) algorithm that uses deep learning techniques and massively large data sets to understand, summarize, generate and predict new content.” Facebook recently had its LLaMA model leaked online. These LLMs are compact enough that the entire model can run on a single laptop. And when such a model is focused on writing compelling phishing emails, the likelihood that users will fall prey to the phishing content increases, leaving the attackers asking, “ChatGPT who?”
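To illustrate just how little infrastructure this requires, here is a minimal sketch of running a local LLM on an ordinary laptop. The library choice (the open-source llama-cpp-python bindings), the model file name, and the prompt are all illustrative assumptions, not details of any leaked model or actual attack tooling.

```python
# Minimal sketch of local LLM text generation on a laptop,
# using llama-cpp-python (pip install llama-cpp-python).
# The model path below is hypothetical: any quantized GGUF
# model file downloaded locally would work the same way.
from llama_cpp import Llama

# Load a quantized model entirely from local disk; no cloud
# service, API key, or provider content filter is involved.
llm = Llama(model_path="./models/example-7b.Q4_K_M.gguf")

# Generate text from a (benign) prompt.
output = llm(
    "Write a short, polite reminder email about an overdue invoice.",
    max_tokens=150,
)
print(output["choices"][0]["text"])
```

The point is not the specific library but the footprint: a few lines of code and a single model file replace what previously required a cloud AI service with usage policies and monitoring.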
These kinds of advancements will quickly become commonplace for phishing scammers, making it absolutely necessary to elevate your users’ vigilance when they interact with email and the web. Literally any content that seems even the slightest bit suspect or out of the norm will need to be treated as hostile until proven otherwise, a mindset already native to those who undergo Security Awareness Training.