Even the most basic use of tools like ChatGPT to draft professional-looking emails has all but eliminated poorly written content as an indicator of a potential phishing scam.
A recent article interviewing Brian Schnese, a senior risk consultant at insurance broker Hub International, highlighted how the misuse of generative AI to craft well-written phishing emails has shifted from a hypothetical threat to an everyday reality. “We’re already seeing it,” Schnese commented.
“We’re in a world today where I can go to ChatGPT and type in, ‘please craft a request to my vendor asking them to change my wiring instructions,’ and it spits out a perfect request. [I can then go] back to ChatGPT and say, ‘please add a sense of urgency and stress the confidential nature of this transaction’ – and again, instantly, it’s perfect.”
Paired with no-code automation to generate content, this means cybercriminals can easily create spear phishing campaigns tailored by industry, victim role, and more, fine-tuning an attack without doing any manual work.
In other words, you can no longer assume the emails used in spear phishing attacks will carry telltale signs – misspellings, poor grammar, and the like – that give them away as fake.
It also means organizations will need to specifically elevate the vigilance of users who interact with the company’s financials – something accomplished through continual security awareness training – so those users are less likely to gloss over a phishing email and treat it as legitimate.
KnowBe4 enables your workforce to make smarter security decisions every day. Over 65,000 organizations worldwide trust the KnowBe4 platform to strengthen their security culture and reduce human risk.