A recent notification from the FBI warns cybersecurity professionals to be on the lookout for deepfake content that will be used for cyberattacks and foreign influence campaigns.
I’ve written about just how good deepfakes are getting, including a story about a deepfake social media account openly impersonating Tom Cruise (meant as satire, in that case). Last month, the FBI published a Private Industry Notification about the growing threat of deepfake-based attacks.
I like that the FBI refers to it as “malicious synthetic content,” which it defines as “the broad spectrum of generated or manipulated digital content, which includes images, video, audio, and text.” That framing puts the concept of a “deepfake” into a broader context.
The FBI warns of Russian, Chinese, and Chinese-language threat actors engaging in foreign influence campaigns and creating “fictitious journalists” (which I can only interpret to mean they’ve used deepfake technology to create images of people who don’t actually exist).
The use of deepfakes has implications for impersonation attacks, scams targeting CEOs, and more. So, it’s important for organizations to use Security Awareness Training to educate employees to be on the lookout for any request that seems out of context, and to establish policy requiring that such requests be verified out of band — that is, over a different channel than the one the request arrived on.
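To make the out-of-band policy concrete, here is a minimal sketch of the decision logic in Python. The action categories, channel names, and function are all hypothetical illustrations, not anything from the FBI notification — the point is simply that a sensitive request counts as verified only when it is confirmed on a channel different from the one it arrived on.

```python
from typing import Optional

# Hypothetical examples of request types that should trigger verification.
SENSITIVE_ACTIONS = {"wire_transfer", "credential_reset", "gift_card_purchase"}

def needs_out_of_band_verification(request_channel: str,
                                   action: str,
                                   verified_channel: Optional[str]) -> bool:
    """Return True if the request still needs out-of-band confirmation.

    A sensitive request is only considered verified when it has been
    confirmed on a *different* channel (e.g., a phone call to a known
    number confirming an emailed wire-transfer request).
    """
    if action not in SENSITIVE_ACTIONS:
        return False
    # Unverified, or "verified" on the same channel the request came in
    # on (e.g., a reply to the original email), still needs confirmation.
    return verified_channel is None or verified_channel == request_channel

# An emailed wire request with no confirmation, or confirmed only by
# reply email, still needs verification; one confirmed by phone does not.
print(needs_out_of_band_verification("email", "wire_transfer", None))     # True
print(needs_out_of_band_verification("email", "wire_transfer", "email"))  # True
print(needs_out_of_band_verification("email", "wire_transfer", "phone"))  # False
```

The design choice worth noting is the same-channel check: a deepfaked voice or email thread can easily “confirm” itself, so confirmation over the original channel should never satisfy the policy.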