Jai Vijayan, contributing writer at Dark Reading, correctly stated: "It's time to dispel notions of deepfakes as an emergent threat. All the pieces for widespread attacks are in place and readily available to cybercriminals, even unsophisticated ones."
The article opens with a conclusion that is hard to get around: "Malicious campaigns involving the use of deepfake technologies are a lot closer than many might assume. Furthermore, mitigation and detection of them are hard."
A new study of the use and abuse of deepfakes by cybercriminals shows that all the elements needed for widespread use of the technology are in place and readily available in underground markets and open forums. The Trend Micro study found that deepfake-enabled phishing, business email compromise (BEC), and promotional scams are already happening and are quickly reshaping the threat landscape.
No Longer a Hypothetical Threat
"From hypothetical and proof-of-concept threats, [deepfake-enabled attacks] have moved to the stage where non-mature criminals are capable of using such technologies," says Vladimir Kropotov, security researcher with Trend Micro and the main author of a report on the topic that the security vendor released this week.
Ready Availability of Tools
One of the main takeaways from Trend Micro's study is the ready availability of tools, images, and videos for generating deepfakes. The researchers found, for example, that multiple forums and code-hosting sites, including GitHub, offer source code for developing deepfakes to anyone who wants it.
In many discussion groups, Trend Micro found users actively discussing ways to use deepfakes to bypass banking and other account verification controls — especially those involving video and face-to-face verification methods.
Deepfake Detection Now Harder
Meanwhile, on the detection front, developments in technologies such as AI-based Generative Adversarial Networks (GANs) have made deepfake detection harder. "That means we can't rely on content containing 'artifact' clues that there has been alteration," says Lou Steinberg, co-founder and managing partner at CTM Insights.
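To make the "artifact clues" idea concrete, here is a minimal error-level-analysis (ELA) sketch in Python, one of the classic heuristics that looks for recompression residue left behind by manual edits. The file name, JPEG quality, and scoring below are illustrative assumptions, not anything taken from the Trend Micro report; GAN-generated media typically leaves little of this residue, which is exactly Steinberg's point about why such heuristics are losing value.

```python
# Minimal error-level-analysis (ELA) sketch. Illustrative only: the file
# name, JPEG quality, and "score" are assumptions for the example.
import io

import numpy as np
from PIL import Image, ImageChops  # pip install pillow numpy


def error_level_analysis(path: str, quality: int = 90) -> float:
    """Re-save a JPEG and measure how unevenly it recompresses.

    Manually edited regions often recompress differently than untouched
    ones, leaving a detectable residue. Fully synthetic (GAN) frames tend
    not to show this unevenness, which is why the heuristic is weakening.
    """
    original = Image.open(path).convert("RGB")

    # Re-encode the image at a fixed JPEG quality in memory.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Per-pixel difference between the original and the re-encoded copy.
    diff = np.asarray(ImageChops.difference(original, resaved), dtype=np.float32)

    # Crude score: variance of the residue across the image. High, uneven
    # residue hints at spliced or locally altered regions.
    return float(diff.var())


if __name__ == "__main__":
    score = error_level_analysis("suspect_frame.jpg")  # hypothetical input file
    print(f"ELA variance: {score:.2f} (higher can indicate local edits)")
```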
RELATED READING: The FBI Warns Against A New Cyber Attack Vector Called Business Identity Compromise (BIC) & Top 5 Deepfake Defenses https://blog.knowbe4.com/deepfake-defense
RELATED READING: Trend Micro Reports Stolen Identities And Deepfakes https://blog.knowbe4.com/trend-micro-reports-stolen-identities-and-deepfakes
Three Broad Threat Categories
Steinberg says deepfake threats fall into three broad categories:
- The first is disinformation campaigns, mostly involving edits to legitimate content that change its meaning. As an example, Steinberg points to nation-state actors using fake news images and videos on social media, or inserting someone who was not originally present into a photo, something often used for implied product endorsements or revenge porn.
- Another category involves subtle changes to images, logos, and other content to bypass automated detection tools, such as those used to spot knockoff product logos, images reused in phishing campaigns, or even child sexual abuse material (a sketch of how such matching typically works follows this list).
- The third category involves synthetic or composite deepfakes that are derived from a collection of originals to create something completely new, Steinberg says.
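Many of the automated matchers in that second category rely on perceptual hashes: compact image fingerprints compared against a blocklist with a fixed distance threshold. The toy average-hash sketch below, with made-up file names and a made-up threshold, shows why small, targeted perturbations can push a copied logo or phishing image just past the cutoff. It illustrates the general technique, not any specific product's detector.

```python
# Toy average-hash comparison. File names and MATCH_THRESHOLD are
# illustrative assumptions, not values from any real detection tool.
import numpy as np
from PIL import Image  # pip install pillow numpy


def average_hash(path: str, size: int = 8) -> np.ndarray:
    """Downscale to size x size grayscale and threshold by the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()


def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits between two fingerprints."""
    return int(np.count_nonzero(h1 != h2))


if __name__ == "__main__":
    known_bad = average_hash("known_phishing_logo.png")  # hypothetical blocklist entry
    candidate = average_hash("suspect_logo.png")         # hypothetical incoming image

    # With a fixed cutoff, an attacker only needs to perturb enough pixels
    # to nudge the distance past it -- the weakness described above.
    MATCH_THRESHOLD = 5
    dist = hamming_distance(known_bad, candidate)
    verdict = "match" if dist <= MATCH_THRESHOLD else "no match"
    print(f"Hamming distance: {dist} -> {verdict}")
```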
Full Dark Reading article here, with links to numerous sources and examples: https://www.darkreading.com/threat-intelligence/threat-landscape-deepfake-cyberattacks-are-here