The FBI warns that synthetic content may be used in a “newly defined cyber attack vector” called Business Identity Compromise (BIC)
Imagine you're on a conference call with your colleagues, discussing the latest sales numbers. Information your competitors would love to get hold of.
All of a sudden, your colleague Steve’s image flickers. It draws your attention, and when you look at it, you notice something odd. Steve’s image doesn’t look quite right. It looks like Steve, it sounds like him, but something seems off. On closer inspection, you see that the area around his face appears to shimmer and its edges look blurry.
You write it off as a technical glitch and continue the meeting as normal, only to find out a week later that your organization suffered a data leak, and the information you discussed during the meeting is now in the hands of your biggest competitor.
Ok, granted, this sounds like the plot of a bad Hollywood movie. But with today’s advances in technologies like artificial intelligence and deepfakes, it could actually happen.
Deepfakes (a blend of “deep learning” and “fake”) can be videos, images, or audio. They are created with a deep learning technique called a Generative Adversarial Network (GAN), in which two neural networks compete: a generator produces synthetic content, while a discriminator tries to tell it apart from the real thing. Each network learns from the other, and GANs are used to superimpose synthesized content over real material or to create entirely new, highly realistic content. With the increasing sophistication of GANs, deepfakes can be incredibly realistic and convincing. Designed to deceive their audience, they are often used by bad actors in cyber attacks, fraud, extortion, and other scams.
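To make the adversarial idea concrete, here is a minimal toy sketch of a GAN training loop, assuming PyTorch. The network sizes and the random-noise stand-in for real images are illustrative placeholders; real deepfake models are vastly larger and train on hours of face footage.

```python
# Toy sketch of the adversarial setup behind deepfakes (PyTorch assumed).
# A generator learns to produce fakes; a discriminator learns to flag them.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # toy sizes, not a real face model

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),          # fake "image"
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),               # real-vs-fake score
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(100):
    real = torch.randn(32, image_dim)              # stand-in for real images
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The key point is the competition: every time the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones.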
Mind you, deepfakes also have more positive applications, like this video of President Obama, which was created to warn viewers about fake news online, or this one of Mark Zuckerberg, created to raise awareness of Facebook’s lack of action in removing deepfakes from its platform.
The technology has been around for a couple of years and has already been used to create fake graphic content featuring famous celebrities. Initially, creating a deepfake was a complicated endeavor that required hours and hours of existing material. But it has now advanced to the point where anyone, without much technical knowledge, can use it. Anyone with a powerful computer can use programs like DeepFaceLive and NVIDIA’s Maxine to fake their identity in real time. And for audio, there are programs like Adobe VoCo (demoed back in 2016), which can imitate someone’s voice remarkably well. This means you can join a Zoom or Teams meeting and look and sound like almost anyone. Install the program, configure it, and you are done. Choose one of the pre-generated identities or load one you created yourself, and you are good to go. It really is that simple.
That ease of use is one of the reasons organizations are so wary of deepfakes. Combine it with realistic content, and things can get scary, very fast. How would you like it if a scammer used your identity in a deepfake? In today’s digital age, where business is just as easily done through a phone or video call, who can you trust?
And this is one of the fundamental dangers of deepfakes. When used in an enhanced social engineering attack, they are intended to instill a level of trust in the victim. It is because of this danger that the FBI has sent out a Public Service Announcement warning about the rising threat of synthetic content, even going so far as to give the attacks a new name: Business Identity Compromise (BIC).
So, what can you do to protect yourself from deepfakes? Can you actually defend against a form of attack that is specifically designed to fool us? Yes, you can, but given the pace at which the technology advances, it isn’t easy. Things designed to fool your senses generally succeed. But there are indicators you can look out for to recognize a deepfake.
RELATED READING: Reshaping the Threat Landscape: Deepfake Cyberattacks Are Here
RELATED READING: Trend Micro Reports Stolen Identities And Deepfakes
1) Identify Deepfakes
Deepfakes can be very well made, but they often still display defects: distortions, warping, or other inconsistencies. Telltale signs include unnatural eye spacing or movement (eyes are hard to render well) and strange-looking hair (equally hard), especially around the edges. You can also watch for syncing inconsistencies between lip, audio, and face movements.
Lighting problems are another good giveaway. Consider whether the lighting and shadows look realistic. If the material is a video, try slowing it down or pausing at certain spots; this can make a deepfake easier to spot. The sketch below shows how one of these giveaways could, in principle, be checked automatically.
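As a rough illustration, here is a toy Python heuristic for the blurry, shimmering face region described in the opening story. It compares the sharpness of the detected face against the rest of the frame, assuming OpenCV is installed; the video file name and the 0.5 threshold are hypothetical placeholders, and this is nowhere near a production-grade deepfake detector.

```python
# Toy heuristic: flag frames where the face is much blurrier than the scene.
import cv2

# Haar cascade for face detection; ships with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sharpness(gray_region):
    """Variance of the Laplacian: low values indicate a blurry region."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

cap = cv2.VideoCapture("call_recording.mp4")  # hypothetical recording
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_sharp = sharpness(gray[y:y + h, x:x + w])
        frame_sharp = sharpness(gray)
        if face_sharp < 0.5 * frame_sharp:  # arbitrary illustrative threshold
            print("Suspiciously soft face region; inspect this frame.")
cap.release()
```

A real detector would combine many such signals (blink rate, lip sync, lighting consistency) and learn the thresholds from data rather than hard-coding them.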
Another way to identify a deepfake is to consider the source. Where was it posted? And is this a reliable source that has vetted the material before putting it online?
2) Train Yourself
Security Awareness Training is a must-have in any good security program. If you don’t train people to detect threats and respond to them in the right way, how else are you going to shape the right security behavior?
But with deepfakes being such a new form of attack, and many people still unaware of them, it is even more important to get up to speed quickly. There are technologies that help organizations identify deepfakes, but the field is still young: the tools are expensive and can often only identify deepfakes among a set of existing media. That makes them unsuited for real-time communications like Zoom or Teams, the very tools a modern workforce uses every day.
3) Security Best Practices and Zero Trust
In security, a proven rule is to verify what you don’t trust. Examples include asking verification questions of someone you don’t trust on a conference call, or checking the digital fingerprint or watermark of an image, as sketched below.
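Here is a minimal sketch of fingerprint-based verification: if the publisher of an image shares a cryptographic hash of the original over a separate, trusted channel, you can recompute it locally and compare. The file name and reference digest below are hypothetical placeholders.

```python
# Verify a file against a vetted SHA-256 fingerprint received out of band.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical reference value, obtained from a trusted separate channel.
KNOWN_GOOD = "replace-with-the-vetted-sha256-hex-digest"

if fingerprint("press_photo.jpg") == KNOWN_GOOD:  # hypothetical file
    print("Fingerprint matches the vetted original.")
else:
    print("Mismatch: the image may have been altered.")
```

Note that a hash only proves the file is unchanged since it was fingerprinted; it says nothing about whether the original itself was genuine, which is why the reference value must come from a source you already trust.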
Verification procedures are a very powerful way to defend against deepfakes. Which ones you use depends on the security requirements of your organization. But whichever procedure you use, make sure to test it regularly. And when you do spot a deepfake, always inform your organization and security team about it. You may not be the only one the bad actors are trying to fool.
And remember, trust is a fundamental requirement for interaction, so don’t overdo it and become distrustful of everything. Be mindful of the signs, and if you spot them, act accordingly.
Another best practice is to keep conference calls private. Make sure all video calls, conference calls, and webinars are (at least) password-protected, so that only trusted individuals have access to them.
4) Understand the Threat
Deepfakes are not only used in video. Video is probably the most well-known application, thanks to Hollywood blockbusters like The Irishman that employ the technology, but understand that it also allows bad actors to use voice deepfakes to scam you. Deepfakes are a multi-faceted technology with many applications.
5) Don’t Give Them any Ammunition
To create a deepfake, you need existing content of the victim. And with our desire to share just about every little aspect of our personal and work lives on social media, we are making it very easy for bad actors. Limit your public presence on social media. Don’t make it easy for a bad actor to recreate your likeness or steal your voice from publicly available data.
Even though the technology behind deepfakes is advancing, they are still in the early stages as an attack vector. This gives us time to prepare. But one thing is certain: as time moves forward, we’ll see bad actors use deepfakes to fool and scam people more and more often. They are simply a threat you cannot afford to ignore.