I have helped people detect romance scams for decades. It is still very common for romance scammers to leverage both pictures of celebrities and pictures of innocent, everyday people as part of these scams.
I have always been amazed by people’s ability to believe that some famous celebrity is not only in love with them but also somehow needs their money to escape current entanglements and begin life anew with them.
In particular, I remember one woman who told me the famous Greek composer and musician Yanni was in love with her. “Yanni” told her that he just needed her money so he could divorce his wife, Linda Evans, and marry her. When I told her that Yanni had never married Linda Evans, which was something she could easily confirm, she broke off communications with me and continued to send “Yanni” money until she had no more money to send.
As Blaise Pascal put it, “The heart has its reasons, of which reason knows nothing.”
One of the methods I used to convince victims that they were not dealing with the person they thought they were was to have the victim ask the scammer for a photo of the purported person doing or holding something that would be difficult for the scammer to find or create.
For example, ask Yanni to get into that beloved convertible of his (which he had been bragging about to the victim), stick his head out the window or poke a finger into the car’s ceiling, and take a picture. Or hold up today’s newspaper in the photo (advice that is showing its age). I would make up some scenario that would be easy for the real person to do, but difficult for a scammer to quickly fake.
Usually, the scammer would just refuse to send the requested photo and instead push back on the victim’s growing mistrust (e.g., “How can you not believe that it is me!”). Or the scammer would sense the jig was up and simply stop communicating with the skeptical victim.
But over time, my advice became less effective as many of these scammers became excellent Adobe Photoshop users. I was amazed at how quickly a scammer could put together whatever strange combination the victim asked for. When the scammer quickly provided the requested one-off photo, it just “proved” to the victim that I was wrong, and some victims even hated me for suggesting that their true love was a scammer.
Today’s AI-enabled deepfakes have made romance scams far easier to pull off. As I covered in this article, it takes only a few minutes for anyone to create a realistic-looking deepfake picture, video, or audio of anyone saying and doing anything. I stated, “…it will take you longer to create the free accounts you need (a minute or two) than it does to create your first realistic-looking deepfake video.”
Here's a great video of KnowBe4’s Chief Human Risk Strategist Perry Carpenter intermingling his own image with that of a famous celebrity.
Scammers are now fully utilizing AI-enabled deepfake tools. iProov, a deepfake research company, found over 60 separate deepfake groups dedicated to creating “synthetic images,” one of which had over 114,000 members. It also found over 100 “face swap repositories.”
Here are recent reports of AI-enabled phishing kits:
- https://www.msspalert.com/brief/mounting-phishing-attacks-enabled-by-ai-deepfakes
- https://thehackernews.com/2024/07/spanish-hackers-bundle-phishing-kits.html
- https://www.msspalert.com/analysis/ai-now-a-staple-in-phishing-kits-sold-to-hackers
These kits are being used in the real world by scammers. Here are some examples of AI-enabled deepfakes being used in real-life scams:
- https://www.group-ib.com/blog/gxc-team-unmasked/
- https://blog.lastpass.com/posts/2024/04/attempted-audio-deepfake-call-targets-lastpass-employee
- https://www.scmagazine.com/news/deepfake-video-conference-convinces-employee-to-send-25m-to-scammers
Malware already exists that steals people’s faces and then uses them to spoof those people’s identities to banks that require facial recognition to transfer large sums of money. Read more here.
It is becoming so bad that Gartner predicts that 30 percent of organizations will not trust single-factor biometric authentication solutions by next year. My question is, what is up with the other 70 percent?
In the new NIST Digital Identity Guidelines, the U.S. government says any use of biometric authentication must be paired with a physical authentication token (e.g., a YubiKey). To me, that really says the physical authentication token is the trusted authenticator here, since the U.S. government allows and accepts it on its own.
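As an aside for the technically curious, here is a minimal sketch of what requiring a physical token can look like in practice, using the standard browser WebAuthn API. The relying-party name, user details, and inline challenge below are placeholders invented for illustration; a real deployment would get the challenge from its server.

```typescript
// Minimal sketch: registering a roaming hardware authenticator (e.g., a YubiKey)
// via the standard WebAuthn browser API. All names and values here are
// illustrative placeholders; in practice the challenge is issued by the server.
async function registerSecurityKey(): Promise<Credential | null> {
  const options: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { name: "Example Corp" },
    user: {
      id: new TextEncoder().encode("user-1234"), // placeholder user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: {
      authenticatorAttachment: "cross-platform", // external token, not built-in biometrics
      userVerification: "required", // PIN or biometric check on the token itself
    },
  };
  return navigator.credentials.create({ publicKey: options });
}
```

The design point worth noticing is that the credential’s private key is only ever usable by the token itself, which is why the token alone can serve as a strong authenticator.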
Brad Pitt Scam
Naturally, romance scammers are using AI-enabled deepfakes to pretend to be celebrities. A frequent celebrity used by scammers is Brad Pitt. One woman was persuaded to divorce her real-life husband and then sent $850K of the divorce proceeds to the Brad Pitt scammer.
Below are some of the images the victim received in the scam. Apparently, “Brad Pitt” was not well, which is great for generating empathy and more solicitations for money.
Lots of AI-savvy people have pointed out how quickly they can personally determine that these images are fake, but most victims are not AI experts. The fake Brad Pitt also sent the victim many love poems, no doubt generated by AI. The victim stated that the fake Brad Pitt really knew how to talk to women. It is probably more accurate to say that the AI the scammer used really knew how to talk to women.
I have seen a bunch of recent demos, including the one by Perry Carpenter above, where the AI is capable of near real-time video responses to a victim’s questions. It is simply a matter of months until these types of real-time AI-enabled services are available to anyone, including scammers.
I know most of us would never fall for a celebrity scam. There is just no way we are going to believe that Brad Pitt or any other celebrity is in love with us AND also needs our money. But everyone is susceptible to some sort of scam, either due to timing, circumstances, or content. We all can be scammed.
What Can You Do – Defenses
Just as we were taught that we could no longer trust an email to be completely truthful, and that caution then spread to SMS messages, voice calls, social media, chat apps like WhatsApp, and even real-life meetings, so, too, does it now apply to any unexpected audio, picture, or video that we receive.
AI-generated or not, if the message is unexpected and asks you to do something you have never done before (at least for that sender), you should probably confirm it using some other method before performing the requested action or reacting too emotionally. See the graphical representation of those points below.
If I had only one minute to teach everyone how to best detect malicious scamming messages now and in the future, this is it: If the contact is unexpected and asking you to do something you have never done before (at least for that requestor), STOP and THINK before you react. It will not work for every scam, but it works for the bulk of them.
Train yourself that way. Train your family that way. Train your employees that way. How well you teach this and how well your employees learn and practice this skill will likely determine whether your organization is successfully hacked in a given time period.
This advice applies to any social engineering scam, AI-enabled or not.
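For readers who like their heuristics explicit, here is a minimal sketch of that two-question rule in code. The Message shape and field names are hypothetical, invented purely for illustration:

```typescript
// Hypothetical shape for an incoming contact (email, SMS, DM, call, video).
interface Message {
  sender: string;
  wasExpected: boolean;        // did you anticipate this contact?
  requestsNewAction: boolean;  // asking you to do something you have never
                               // done before for this sender?
}

// Returns true when the contact deserves out-of-band verification
// (call a number you already know, check in person, etc.) before acting.
function stopAndThink(msg: Message): boolean {
  return !msg.wasExpected && msg.requestsNewAction;
}

// Example: an unexpected "Brad Pitt" asking for money trips the rule.
const suspicious = stopAndThink({
  sender: "Brad Pitt",
  wasExpected: false,
  requestsNewAction: true,
});
console.log(suspicious); // true
```

The point of the sketch is that the rule keys on the request pattern, not on whether the media itself looks fake.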
Some people say the way to defeat AI-enabled deepfakes is to use tools that detect AI-generated content. There is just one problem with that defensive strategy. I like to follow Perry Carpenter’s advice from his recent best-selling AI book, FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions. Perry summarizes the main AI problem this way:
Nearly every tool and service we use is going to be AI-enabled and assist us in some way. Our social media channels are going to help us create AI-assisted better versions of ourselves, with better text, audio, pictures and video. Every audio, picture, and video tool is using or going to use AI to make better output which we all will happily use. They already are. To ask a deception-detecting tool if something is AI-generated or not does not make sense in a world where many, many legitimate things that we are all going to use are AI-assisted or AI-generated.
Note: You should buy Perry’s FAIK book.
Is that audio, picture, or video AI-generated? Yes! There you have it. I have already told you how any AI-detection tool will respond to nearly all future-generated audio, video, and images.
The primary question you have to ask yourself is whether what you are being told or shown is trying to maliciously deceive you in service of some agenda.
Whether the content is real or AI-generated is not as important as whether it is trying to maliciously deceive you. Focus on the content…not whether an image looks a little fake or has blurred fingers.
So, lessen your “Is that AI?” radar and strengthen your “Is that BS?” radar.
Focus on the message. Someone trying to scam you still needs to communicate the scam to you. It is just a matter of how they communicate it…is it in an email, on social media, or via an AI-enabled deepfake?
Closing
The era of easy deepfakes is here…has been here…and is just going to get easier and more common. But we humans are a resilient bunch. We are not just going to sit there and get scammed over and over again without reacting. All our cyber defense tools will be AI-enabled and be able to better protect us against AI-enabled (and real) scams.
We just need to treat all audio, images, and video like we treat emails and text messages today. Focus on the content of the message, because if I am trying to scam you, the message or content will be malicious in some way, and that does not change just because it looks like me or a hybrid version of me. I still have to ask you to send me your password, send money somewhere, or do something that is harmful to your own interests.
If you want more assistance to help your co-workers spot deception, get Perry Carpenter’s book.
And if you are a KnowBe4 customer, use our videos and other content to educate yourself and your co-workers.