Will AI and Deepfakes Weaken Biometric MFA?

You should use phishing-resistant multi-factor authentication (MFA) whenever you can to protect valuable data and systems. But most biometric and MFA implementations are not as strong as touted, and many of them can be hacked and bypassed with little effort. That doesn’t necessarily mean you shouldn’t use them; just pick strong, more trustworthy implementations and never assume they can’t be hacked.

A growing number of applications and services use biometric identity and authentication based on facial and voice recognition. Many phones and computing devices now accept facial identification as a way to unlock the device, and many organizations’ technical support centers use voice recognition to identify or authenticate customers. This isn’t a bad thing, but in general, both facial- and voice-recognition systems are less reliable than their vendors tout.

Watch Hacking Biometrics: If You Thought Your Fingerprints Were Safe, Think Again!

Both facial and voice recognition at scale suffer from high rates of false positives and false negatives. Vendors often brag about nearly inconsequential error rates of 1 in 10,000 or 1 in 100,000, but independent testing, such as this evaluation by the US National Institute of Standards and Technology (NIST), reveals far higher error rates. For example, NIST found facial recognition error rates ranging from 1 in 2 (worst) to 1 in 100 (best). A 1% error rate may not sound bad, but it is high for computer authentication. And if facial recognition really is far more accurate than US government testing reveals, why aren’t those more accurate vendors submitting their products to US government testers and blowing away the weak results seen so far?
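To see why even the "best case" 1-in-100 error rate matters, here’s a quick back-of-the-envelope sketch in Python. The login volume is hypothetical, chosen only to show the arithmetic:

```python
# Illustrative sketch: why a "1% error rate" matters at authentication scale.
# All numbers are hypothetical, for back-of-the-envelope purposes only.

def expected_failures(error_rate: float, attempts: int) -> float:
    """Expected number of misidentifications for a given error rate and volume."""
    return error_rate * attempts

daily_logins = 1_000_000  # hypothetical authentication volume

# NIST-style best-case error (1 in 100) vs. a vendor-claimed 1 in 100,000:
best_case = expected_failures(1 / 100, daily_logins)      # ~10,000 errors/day
vendor_claim = expected_failures(1 / 100_000, daily_logins)  # ~10 errors/day

print(round(best_case), round(vendor_claim))
```

At scale, the gap between the marketed error rate and the independently measured one is the difference between a handful of support tickets and thousands of misidentifications a day.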


Voice-recognition systems generally perform even worse than facial-recognition systems. In the largest known public test of multiple voice-recognition systems, although there were some top performers, most voice-recognition applications had error rates of around 9% to 16%, and some had accuracy rates in the single digits.

Here’s a YouTube video showing a fairly typical demo of someone faking out a real-world voice-recognition system. Click any of the recommended related videos on voice recognition to see dozens of similar hacking successes. Let’s just say it isn’t rare for someone to defeat a voice-recognition system.


This isn’t to say that facial- and voice-recognition systems can’t be trusted for identity and authentication. There are good and bad performers. If you need strong security, make sure the system you are using or considering is one of the better performers. But don’t rely on the vendor’s marketing claims; ask for, or look for, independent testing. You want a demonstrably accurate solution, not just a vendor claiming good accuracy.

Even poor performers can be OK for their use case. For example, I know the facial-recognition reader on my phone isn’t all that accurate, but it works perfectly for me: I use it to access my phone, and a common thief who steals my phone isn’t likely to bypass it. And voice-recognition technology may not be highly accurate in general, but when paired with other verification factors (such as verifying the caller’s phone number and asking for other personal information), it may work well enough for the application. Just because a facial- or voice-recognition system isn’t great doesn’t mean it can’t work well enough for the scenario it’s deployed in. Just realize that, overall, facial- and voice-recognition systems range from weak to strong, and it never hurts to pick a strong one.
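The pairing idea above can be sketched in a few lines. This is a minimal illustration, not any vendor’s logic; the thresholds and factor names are hypothetical:

```python
# Hedged sketch: backing a so-so biometric score with non-biometric checks.
# Thresholds are hypothetical, chosen only to illustrate the decision pattern.

def authenticate(voice_score: float, caller_id_matches: bool, kba_correct: bool) -> bool:
    """Accept a caller only when a middling voice match is corroborated."""
    STRONG_MATCH = 0.95  # above this, accept on voice alone (hypothetical)
    WEAK_MATCH = 0.70    # below this, reject outright (hypothetical)

    if voice_score >= STRONG_MATCH:
        return True
    if voice_score < WEAK_MATCH:
        return False
    # Middling voice match: require both supporting factors.
    return caller_id_matches and kba_correct

# A mediocre voice match passes only with corroboration:
print(authenticate(0.80, caller_id_matches=True, kba_correct=True))   # True
print(authenticate(0.80, caller_id_matches=True, kba_correct=False))  # False
```

The design point is that the weak factor never stands alone in the gray zone; an attacker with only a cloned voice still fails the other checks.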

How Will AI and Deepfakes Impact Biometrics?

A common question is how artificial intelligence (AI) and deepfakes will impact biometrics and MFA. I think it’s a safe bet that ever-improving AI and deepfake applications will further diminish the accuracy of facial- and voice-recognition systems, at least as they work today. In fact, if the facial- and voice-recognition vendors didn’t respond, those industries and products would decline significantly. But they are certainly inventing and improving mitigations to fight back!

As AI and deepfake technologies improve, a whole new industry is developing to better detect fraudulent and unapproved uses of AI and deepfakes. As with cybersecurity in general, the defense is always playing catch-up to the attackers, but defenders will get better at defeating even very good, fraudulently used AI and deepfakes. You won’t see the facial- and voice-recognition industries simply disappear without a fight.

Plus, if you pair facial- and voice-recognition systems with other authentication factors, as discussed above, the additional non-biometric factors will help defeat threat actors who have faked only the biometric attribute. Ideally, the non-biometric factor should be something more than rote repetition of a well-known fact (such as a social security number or home address). I like it when vendors ask me questions that only I could answer by already being in their system, such as:

  • What was your last stock trade?
  • What online game did you last play?
  • How much was your last payment?

Attackers can learn the answers to those questions, but they are harder to obtain than something anyone can learn by Googling my name.
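This kind of "dynamic" knowledge-based check, drawn from recent account activity rather than static, Google-able facts, can be sketched as follows. The record structure and field names are hypothetical:

```python
# Hedged sketch of dynamic knowledge-based authentication (KBA): challenge the
# caller with facts only an account holder would know. Data is made up.

account_activity = {
    "last_trade": "Sold 10 shares of ACME",
    "last_game": "chess",
    "last_payment": "$42.17",
}

def challenge(answers: dict, required: int = 2) -> bool:
    """Pass only if enough activity-based answers match the account record."""
    correct = sum(
        1
        for key, given in answers.items()
        if account_activity.get(key, "").lower() == given.strip().lower()
    )
    return correct >= required

# Two correct activity answers pass; a wrong one drops below the threshold:
print(challenge({"last_game": "Chess", "last_payment": "$42.17"}))  # True
print(challenge({"last_game": "poker", "last_payment": "$42.17"}))  # False
```

Requiring multiple recent-activity answers raises the bar: an attacker would need access to the account’s transaction history, not just the victim’s public footprint.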

In the short term, I think existing facial- and voice-recognition systems will be greatly challenged by ever-improving AI and deepfake technologies. Some existing vendors, without enough resources or vision, will fall out of the industry. Others will rise to the challenge and become market leaders. Your job, then, just as today, is to pick the great performers. There are always good and bad performers, and the badly performing vendors don’t go out of their way to educate you about their relative performance. So do a little research before picking a vendor.

Also, recognize that most cybersecurity attacks occur because someone was tricked by social engineering into doing something they shouldn’t have. Biometrics and MFA don’t change that equation much. Most of our cybersecurity problems begin and end with humans, and they can’t be solved with technology alone. A little education (and new-school security awareness training) goes a long way.

12 Ways to Defeat Multi-Factor Authentication On-Demand Webinar

Roger A. Grimes, KnowBe4's Data-Driven Defense Evangelist, explores 12 ways hackers use social engineering to trick your users into revealing sensitive data or enabling malicious code to run. Plus, he shares a hacking demo by KnowBe4's Chief Hacking Officer, Kevin Mitnick.

Watch the Webinar

