FBI Warns Against Deepfakes' Potential for Social Engineering

Stu Sjouwerman | Mar 16, 2021

The FBI has issued an advisory warning of an expected increase in the use of deepfakes for social engineering attacks. Deepfakes are images, videos, audio, or text created via AI to produce extremely convincing imitations of real people.

“Malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months,” the advisory states. “Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spear phishing and social engineering in an evolution of cyber operational tradecraft.”

The FBI says threat actors will also incorporate deepfakes into sophisticated business email compromise scams.

“Synthetic content may also be used in a newly defined cyber attack vector referred to as Business Identity Compromise (BIC),” the FBI says. “BIC will represent an evolution in Business Email Compromise (BEC) tradecraft by leveraging advanced techniques and new tools. Whereas BEC primarily includes the compromise of corporate email accounts to conduct fraudulent financial activities, BIC will involve the use of content generation and manipulation tools to develop synthetic corporate personas or to create a sophisticated emulation of an existing employee. This emerging attack vector will likely have very significant financial and reputational impacts to victim businesses and organizations.”

Fortunately, many deepfakes aren’t perfect (at least not yet), and the Bureau offers the following advice to help recognize this content.

“Visual indicators such as distortions, warping, or inconsistencies in images and video may be an indicator of synthetic images, particularly in social media profile avatars,” the advisory says. “For example, distinct, consistent eye spacing and placement across a wide sample of synthetic images provides one indicator of synthetic content. Similar visual inconsistencies are typically present in synthetic video, often demonstrated by noticeable head and torso movements as well as syncing issues between face and lip movement, and any associated audio. Third-party research and forensic organizations, as well as some reputable cyber security companies, can aid in the identification and evaluation of suspected synthetic content.”

The FBI also recommends using the SIFT framework to help identify these attacks.

“Finally, familiarity with media resiliency frameworks like the SIFT methodology can help mitigate the impact of cyber and influence operations,” the FBI says. “The SIFT methodology encourages individuals to Stop, Investigate the source, Find trusted coverage, and Trace the original content when consuming information online.”

People have tended to worry about deepfakes because of their potential use in disinformation campaigns and influence operations. But as the FBI points out, they shouldn't be overlooked as a social engineering tool. New-school security awareness training can give your employees a healthy sense of suspicion so they can avoid falling for evolving social engineering tactics.

The FBI has the story.
