As generative AI evolves and becomes a mainstream part of cyber attacks, new data reveals that deepfakes are leading the way.
Deepfake technology has been around for a number of years, but the AI boom has sparked new attacks, campaigns, and players, all trying to use impersonation technology to rob victims of their credentials, personal details, or money.
We recently covered multiple deepfake campaigns, all perpetrated by a single individual, that reached a global scale. AI and automation enable that kind of reach, making it a realistic possibility for scammers everywhere.
According to Ironscales’ latest report, Deepfakes: Is Your Organization Ready for the Next Cybersecurity Threat?, 75% of organizations have experienced at least one deepfake-related incident within the last 12 months. And 60% of organizations are only ‘somewhat confident’ or ‘not confident at all’ in their ability to defend against deepfake threats. Given the rate at which deepfake-related incidents are occurring, it’s imperative that organizations know where to focus their defenses.
According to the report, 39% of organizations cited incidents arriving in the form of personalized phishing emails – a practical medium, given that email addresses, sender names, and brands can all be imitated. Deepfakes fit right in.
And because email is such a common delivery medium for deepfakes, it’s critical that recipients learn, through new-school security awareness training, to spot suspicious or malicious emails well before engaging with deepfaked audio or video.
KnowBe4 empowers your workforce to make smarter security decisions every day. Over 70,000 organizations worldwide trust the KnowBe4 platform to strengthen their security culture and reduce human risk.