New insights from cybersecurity artificial intelligence (AI) company Darktrace show a 135% increase in novel social engineering attacks driven by generative AI.
The new form of social engineering behind this increase is far more sophisticated, using linguistic techniques such as increased text volume, punctuation, and sentence length to trick victims. We've recently covered ChatGPT scams and various other AI scams, but this attack proves to be very different. In its latest findings, Darktrace Research identified the 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email customers from January to February 2023, corresponding with the widespread adoption of ChatGPT.
Darktrace's study also shows that 82% of employees surveyed are concerned about cybercriminals using generative AI to create realistic scams. Max Heinemeyer, Chief Product Officer at Darktrace, said in a statement: "The email threat landscape is evolving. For 30 years security teams have given employees training on spotting spelling mistakes, suspicious links, and attachments. While we always want to maintain a defense-in-depth strategy, there are increasing diminishing returns in the approach of entrusting employees with spotting malicious emails. In a time where readily-available technology allows us to rapidly create believable, personalized, novel and linguistically complex phishing emails, we find humans even more ill-equipped to verify the legitimacy of ‘bad’ emails than ever before. Defensive technology needs to keep pace with the changes in the email threat landscape, we have to arm organizations with AI that can do that."
Unfortunately, social engineering attacks are only going to get more sophisticated with the help of AI. New-school security awareness training is necessary for your users to learn about the latest cyber threats. Remember: your users are your organization's LAST line of defense!
BetaNews has the full story.