Game-Changer: Biometric-Stealing Malware



By Roger Grimes

I have been working in cybersecurity for a long time, since 1987, more than 35 years. And, surprisingly to many readers and observers, I often say I have not seen anything new in the hacker/malware space since I began. The threats that were a problem then are the same problems now.

Social engineering and unpatched software (and firmware) have been the two biggest initial root causes of hacking for decades. Other types of hacking, such as malware, eavesdropping, password guessing/cracking, misconfiguration, insider attacks, injection attacks, and side channel leaks, have been around the whole time.

Sure, the communication channels where they are used and exploited have changed over time (e.g., social media, Slack, SMS, etc.), but how the attacks are crafted and performed has not really changed. 

Whenever someone announces something “new”, it reminds me of something I first read about in the 1980s. What is new is really old. It is just that the author of that new article or CEO of that new company was not born yet and is not a great student of cyber history.

But I must say that the recent revelation of a biometric-stealing malware program is a game-changer!

Not in the sense that it is malware that is stealing authentication secrets. That has been done for decades. But so far, those stolen authentication secrets have been limited to passwords and multi-factor authentication (MFA) generated codes. Now, we have something very different. We have malware that steals biometric traits (e.g., fingerprints, faces, etc.). I am unclear as to whether the malware then uses those stolen traits in AUTOMATED account takeover (ATO) attacks to do harm or whether a human is involved in that part of the hack, but the damage is done either way.

This means that biometric verifiers are forever weakened as a super strong authenticator and should probably never be used in single-factor authentication (1FA), especially in remote login scenarios, to protect valuable data and systems.

Biometrics Have Many Problems

One of the (many) problems with biometrics is that many of the involved traits (e.g., face, fingerprint, etc.) are not secrets. Your face, fingerprints, and even DNA, are easy to get, steal, and reuse. They have never been great 1FA verifiers. Second, any stolen biometric factor ends up being a forever problem for the legitimate holder.

How can any system, especially a remote system, ever know it is the real person logging on if an unauthorized third party has the other person’s biometric trait? And it is not like the legitimate biometric trait holder can easily change their biometric factor once stolen. What are you going to do, change your face, fingerprints, or irises? It’s possible, but not easy, and who wants to do that to mitigate a logon problem?

This has always been a problem. It was demonstrated as a big real-world problem when Chinese hackers stole 5.6 million fingerprints from the U.S. government in 2015. Anyone who had ever applied for a U.S. government security clearance and submitted their fingerprints was in the stolen cache. It included normal people like me and my wife, and people working for the FBI, CIA, and NSA. But the U.S. government is not the only entity to blame for our biometric traits being stolen.

Biometric credential thefts have happened routinely, year after year. Here is an instance from 2019. Here are two reports on newer biometric leaks from 2023.

And I am ignoring the other huge elephant in the room: biometrics, the way they are captured and used, are not that accurate. They are not as accurate as most vendors claim and not nearly as accurate as most users believe. Your fingerprint may be unique in the world, but the way your fingerprint is captured, stored, and reused by a biometric solution certainly is not.

The inaccuracy of biometric authentication is not necessarily a bad thing. Sometimes weak accuracy is accurate enough. For example, cell phone fingerprint readers are among the most inaccurate of all general biometric authentication solutions. Still, I use my fingerprint to open my cell phone. I am not trying to protect my phone from James Bond-style attackers trying to compromise my employer’s greatest nuclear secrets (or my bank account). Nope.

All my fingerprint is doing is allowing me to log on quickly and protecting my phone against easy unauthorized access if a typical cell phone thief finds or takes my phone. There is a good chance the fingerprint reader will be good enough to stop their crude attempts to log into my phone. Criminals seeing the fingerprint request will usually just do a hard reset or wipe the phone before they send it to another country for resale. That is really all the fingerprint reader logon solution was designed for. It could easily be made more accurate, but that would cause far more logon problems for the legitimate users trying to use it. So, vendors intentionally make readers less accurate to lessen the inconvenience to legitimate users. And most users do not know and would not care if they did know.
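To make that tradeoff concrete, here is a minimal Python sketch of how a vendor might tune a match-score threshold. All the scores and thresholds below are made-up illustrative numbers, not from any real sensor:

```python
# Illustrative only: made-up similarity scores, not from any real device.
# Higher score = the sample looks more like the enrolled fingerprint.
genuine_scores = [0.91, 0.85, 0.78, 0.88, 0.70, 0.95]   # legitimate user attempts
impostor_scores = [0.30, 0.55, 0.62, 0.41, 0.58, 0.66]  # other people's fingers

def error_rates(threshold):
    """Return (false reject rate, false accept rate) at a given threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

for t in (0.60, 0.75, 0.90):
    frr, far = error_rates(t)
    print(f"threshold={t:.2f}  false rejects={frr:.0%}  false accepts={far:.0%}")
```

Lowering the threshold means the legitimate owner almost always gets in on the first touch, at the cost of accepting more impostor samples. That is exactly the convenience-over-accuracy tuning described above.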

If you are interested in more on this topic, see my one-hour webinar on hacking biometrics here.

Biometric attacks have been around forever, since the beginning of biometric solutions. But before this moment in time, all were manual attacks involving one or more human beings, and there was really no incentive to try to automate them. Remote biometric authentication did not go mainstream until the last few years. But now, many sites and services are starting to allow or require biometric authentication. Attackers, if they want to be successful against those types of services, have to step up their game. And they have.

Why do something twice when you can automate it? Malicious programmers like to take a bad, devious thing and automate it whenever they can. It allows a malicious technique to go from being used very slowly, hacker-by-hacker, one victim at a time, to something automated that can easily impact tens of millions of people at once. It is what we call “weaponization” in the computer security world.

Since most people still use passwords, password-stealing malware is one of the most popular types of malware programs on the planet. Password-stealing malware programs have stolen tens to hundreds of millions of passwords. As people have started to use multifactor authentication (MFA) more and more, many of those password-stealing malware programs have morphed into MFA-stealing malware programs. Today, many/most password-stealing malware programs are also MFA-stealing malware programs. Here is an example.

Biometric Malware

Until recently, I had not heard of malware that stole (and possibly used) biometrics. Now I have. The world has changed.

On February 15, Group-IB’s Asia-Pacific Threat Intelligence team reported discovering “…a new iOS Trojan [called GoldPickaxe.iOS] designed to steal users’ facial recognition data, identity documents, and intercept SMS.” It was created by GoldFactory, a well-known maker of advanced Chinese bank-stealing trojans. It captures the victim’s face image and then uses AI face-swapping services to create deepfake images of the victim for future logins to their bank accounts. That is it. The old world as we know it is over. Now we have to worry about malware stealing and recreating our faces.

Although the first and only facial biometric trojan I am aware of currently targets only Asia-Pacific victims and banks, biometric trojans will clearly go international as needed. The hardest part, creating and using one, is already done. The GoldPickaxe family of trojans already targets Android phones as well.

As with most mobile malware programs, social engineering is the primary delivery method. In this case, the GoldPickaxe.iOS trojan posed both as an Apple test platform program (and was subsequently removed by Apple) and as a Mobile Device Management (MDM) “profile”. It then collects face profiles and ID documents, and intercepts SMS messages, from the victim’s mobile device.

Note: From what I can currently read, it is not clear how or when the facial information is stolen, but that unknown does not change my conclusions.

It uses the stolen face profiles with AI-enabled deepfake services (there are hundreds) to generate victim faces for future use. Asian central banks, including the Bank of Thailand and the State Bank of Vietnam, require customers to use facial recognition to withdraw or transfer large sums of money from their accounts. Here are more details.

Biometric Deepfake Attack

It is good to understand how captured facial recognition data can be used in a biometric attack. Traditional facial recognition attacks require the attacker to get a picture of, or digital data about, a particular biometric trait. The attacker then recreates the biometric trait and reuses it during the attempted authentication event, for example, recreating a user’s face and holding it up to a camera when the site asks for the user’s face to verify the transaction. Traditional biometric attacks have varying levels of success, but this is the way most biometric attackers worked over the last two decades (until recently). Traditional biometric attacks are not guaranteed to succeed, and they do not scale.

But if the attacker can gain access to the biometric data (in clear form), they essentially become that victim to the sites and services requesting the biometric attribute. They can capture the biometric data as the user’s device records it, capture it as it is used by the real victim on their legitimate device (essentially performing an adversary-in-the-middle attack on the biometric solution being used), or simply copy the biometric attribute from where it is stored (at rest). Any of these works just fine. The end goal is to get a copy of the user’s biometric trait, however that is accomplished.

Once the attacker has the needed biometric attribute, they can replay it when needed to fake out an authentication system. Sometimes, all that is needed to fool the biometric authentication system is a picture of the biometric attribute (e.g., face, fingerprint, etc.). Other times, “liveness detection” in the authentication solution requires that the submitted biometric sample have current attributes likely associated with a real, live person (e.g., blinking eyelids, blood moving through veins, detected temperatures, liquid surface areas, changing skin tones, changing voice volume, saying particular words, etc.).

In these cases, the hackers have to take the captured, static biometric attribute and make it seem alive, whatever that means for the targeted authentication system. That is where AI-enabled deepfake services come in.
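As a toy illustration of what liveness detection is checking, here is a minimal, hypothetical Python sketch of one of the simplest checks, blink detection. It assumes some upstream face tracker supplies a per-frame eye-openness score; the function name and thresholds are invented for this example:

```python
# Hypothetical liveness check: assumes an upstream face tracker supplies
# an eye-openness score per video frame (0.0 = closed, 1.0 = wide open).

def passes_blink_check(eye_openness_per_frame, closed=0.2, open_=0.8):
    """Require at least one full blink: open -> closed -> open again."""
    saw_open = saw_blink = False
    for score in eye_openness_per_frame:
        if score >= open_:
            if saw_blink:
                return True        # eyes reopened after a blink: looks alive
            saw_open = True
        elif score <= closed and saw_open:
            saw_blink = True       # eyes closed after having been open
    return False

print(passes_blink_check([0.9, 0.9, 0.1, 0.05, 0.9]))  # live-looking video: True
print(passes_blink_check([0.9] * 50))                  # static replayed photo: False
```

A static photo held up to a camera never blinks, so it fails this check. An AI face-swap stream that animates the stolen face can synthesize the blink on demand, which is exactly why deepfake services defeat naive liveness checks.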

Malicious Deepfake Attacks

iProov’s Threat Intelligence service discusses malicious “face swap injection attacks” in more detail here and has been tracking similar threats since at least 2022. An attacker who is able to capture a biometric attribute and use AI-enabled deepfake services can essentially create a “living synthetic” identity of the victim.

iProov has found over 60 groups and 110 different face swapping tools. They state, “The most common face swap tools being used offensively are currently SwapFace, followed by DeepFaceLive and Swapstream.” Most of the members of those groups and users of those tools are just regular users, but many are malicious. 

A hacker with a stolen biometric identity or a newly created biometric synthetic identity needs a few other things to pull off their biometric identity attack. First, unless they are using the victim’s real device in the future authentication attempt, they need a device “emulator”. An emulator fakes being the user’s device (e.g., computer or mobile phone).

The attacker will steal the necessary device-identifying information from the original victim so that the emulator appears as close as possible to the victim’s real device to the legitimate site/service they are trying to log into. Sites often track multiple device attributes of their legitimate users, so if something is off, they will deny the “easy” login method. Attackers look for and steal user device “metadata” whenever possible. The stolen device information is fed into the emulator.

Then, the attacker might use a “virtual camera” that simulates the victim’s real device camera. The malicious software camera injects an identity instead of capturing and displaying a live image in real time. The attacker injects the stolen biometric attribute or the generated synthetic biometric attribute during the login step. And as long as the legitimate site/service does not detect all the fakery (and most do not), the biometric authentication succeeds.
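To see why the stolen device metadata matters, here is a minimal, hypothetical Python sketch of the kind of device check described above. The attribute names and the exact-match rule are illustrative assumptions, not any real bank’s logic:

```python
import hashlib

# Hypothetical device attributes a site might record at enrollment.
# Real services track many more signals; these names are illustrative.
enrolled_device = {
    "model": "iPhone14,2",
    "os_version": "17.2.1",
    "screen": "1170x2532",
    "locale": "th-TH",
}

def device_fingerprint(attrs: dict) -> str:
    """Hash the sorted attributes into a stable device fingerprint."""
    blob = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(blob.encode()).hexdigest()

def allow_easy_login(presented: dict) -> bool:
    """Permit the low-friction login path only from a recognized device."""
    return device_fingerprint(presented) == device_fingerprint(enrolled_device)

# An emulator fed the victim's stolen metadata matches perfectly...
print(allow_easy_login(dict(enrolled_device)))                  # True
# ...while a generic emulator with default values gets flagged.
print(allow_easy_login({**enrolled_device, "locale": "en-US"})) # False
```

Because the check only compares presented attributes against previously recorded ones, an emulator replaying the victim’s exact stolen metadata passes it cleanly, which is why attackers harvest device metadata alongside the biometric itself.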

iProov claims, “We observed an increase in face swap injection attacks of 704% H2 over H1 2023.”

I am greatly simplifying these attacks and not covering a dozen other similar types of biometric attacks, but you get the idea. Biometric attacks are here and not leaving. Biometric malware is here and not leaving. 

Note: Most of this article has focused on facial image theft. Any biometric attribute could be similarly used and abused.

I do not have enough detail on this first biometric malware attack to determine whether the malware both steals AND USES the biometric attribute information, but my best guess is that this first-generation malware program just steals the biometric (and the device metadata). The monetary theft that follows probably requires one or more humans using AI-enabled deepfake tools to create and use the new synthetic identities.

This is especially likely because today’s AI deepfake tools usually require multiple attempts to produce an acceptable fake identity. But over time, the AI deepfake tools will get better, and the malware will likely also automate the resulting monetary theft. It will be seconds from the initial compromise to the resulting monetary theft. It is the natural evolution of malware.

In light of this new revelation, I am not sure how any service using/requiring remote single-factor biometric authentication can be trusted any more than a service that allows login names and passwords as the sole authentication factor. I certainly would not trust 1FA biometric solutions to protect my most valuable data and systems.

Note: I am not against someone using 1FA biometrics when it must be done in person. In general, most thieves will not take the chance of showing up in person to attempt a biometric attack, and it does not scale. In-person biometric attacks are the stuff of nation-states and super serious corporate spies. It is a problem, and I have seen it in the real world, but it is not a problem for most people.

Some might ask: since I am equating remote 1FA biometric factors with the very common login name and password solutions, why am I picking on remote 1FA biometric systems when most of the world runs on 1FA remote passwords?

Very simple. We can change our passwords. 

You cannot change your biometric attributes. Once your biometric trait is captured by a malicious party, it is game over for that attribute. How can any remote system ever trust it again? It is this one fact that makes biometrics worse than traditional login names and passwords. One biometric compromise and you are done using that biometric attribute forever? Do I start keeping a chart of which of my biometric attributes have been knowingly compromised (“Well, I think my retina scan got compromised at my doctor’s office visit last May, and I think…”)?

Defenses

So, what are your defenses?

First and best is education. Tell your management, IT security team, and end users about the possibility of real-world biometric attacks. Even if you do not use biometric authentication at work, your coworkers, friends, or family might use it personally somewhere else, and sharing is caring. Make people aware of the types of possible biometric attacks and how to defend against them.

The key is to avoid being socially engineered into allowing your biometric attribute-storing device to be compromised in the first place. The phishing emails that try to get your biometric attributes look mostly like standard phishing attacks. Keep your coworkers, family, and friends resilient against most phishing attacks and you keep them resilient against biometric-stealing attacks. That is step one and the most important one. It cannot be overstated. Ignore this advice at your peril.

All vendors allowing remote biometric authentication should require another factor of authentication (i.e., MFA). 1FA biometric remote authentication should not be allowed to protect valuable data and systems. Any system allowing or using biometric attributes as part of its authentication solution should require a separate physical authentication factor (such as a FIDO2 key). This is not just me saying it. The most recently released version of NIST’s Digital Identity Guidelines states that biometric factors should always be paired with a “physical authenticator”. Hmm. I wonder why NIST said that.

Vendors who store biometric attributes should make them extremely hard to steal. Apple stores fingerprint and face data in its devices’ Secure Enclave chip (https://support.apple.com/guide/security/face-id-and-touch-id-security-sec067eb0c9e/). Yet, somehow, GoldPickaxe.iOS is apparently getting to the biometric data. Perhaps it is capturing it before it is stored in the Secure Enclave or as it is being used. I do not know. But some part of the process is not being protected well enough.

Lastly, all vendors who collect and store biometric attributes should store those attributes in such a way that stealing them is useless to the thief. Certainly, people’s whole fingerprints, faces, voices, and eye scan data should not be stored. That is craziness! Instead, they should be transmogrified into something that the device storing and using the attribute can recognize, but that is useless and impossible to reconstitute for any thief who tries to use it on another device. I previously covered this advice in more detail here.

One example is to take a fingerprint and capture/map particular points that are related to the fingerprint, so it can be reconstituted when needed. You end up with something that looks like a star constellation that represents the fingerprint. Take only the points you need. Turn the points into coordinates. Then, cryptographically hash the coordinates. Store only the cryptographic hash (or hashes).

And when a biometric comparison is needed, perform the same steps on the newly submitted biometric trait and compare hashes. I am, again, oversimplifying. But the idea is to capture, store, and use biometric data in a way that makes it useless off the device if it is captured by an unauthorized party. It can still possibly be abused on the device, but this approach takes away a large percentage of biometric attacks.
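Here is a minimal Python sketch of that idea, under heavy simplifying assumptions: made-up minutiae coordinates, a coarse quantization grid standing in for real template matching, and no per-device salting or fuzzy matching, all of which a production scheme would need:

```python
import hashlib

def template_hash(minutiae, grid=8):
    """Quantize fingerprint minutiae (x, y) points to a coarse grid,
    then hash the sorted coordinates; only this hash is ever stored."""
    quantized = sorted((x // grid, y // grid) for x, y in minutiae)
    blob = ";".join(f"{x},{y}" for x, y in quantized)
    return hashlib.sha256(blob.encode()).hexdigest()

# Made-up minutiae points from "enrollment" of a fingerprint.
enrolled = [(103, 58), (40, 201), (77, 149), (180, 92)]
stored_hash = template_hash(enrolled)  # the device stores only this

# A fresh scan of the same finger, with small sensor jitter that the
# coarse grid absorbs; a different finger yields a different hash.
same_finger = [(101, 57), (41, 203), (78, 150), (181, 90)]
other_finger = [(20, 30), (150, 40), (90, 200), (60, 60)]

print(template_hash(same_finger) == stored_hash)   # True
print(template_hash(other_finger) == stored_hash)  # False
```

A thief who steals the stored hash cannot reconstruct the fingerprint from it, and the hash is useless on any system that quantizes, salts, or formats its templates differently. Real designs add per-device salts and error-tolerant (fuzzy) matching on top of this basic idea.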

If you thought biometric authentication was the be-all-end-all in authentication, this should serve as your wake-up call. We now have biometric-stealing malware and the world will never be the same. The first defense is awareness and education. 

