How Much Will AI Help Cybercriminals?



Roger Grimes

Do not forget: AI-enabled technologies, like KnowBe4’s Artificial Intelligence Defense Agents (AIDA), will make defenses increasingly better.

I am asked a lot to comment on AI, usually by people who wonder, or are even a bit scared, about how it can be used to hurt them and others. It is certainly a top topic in the cybersecurity world. One of the key questions I get is:

How much worse will AI make cybercrime?

The quick answer is: No one knows.

Anyone giving you an answer is taking their best guess or simply making it up. But we do know cybercrime is going to get worse if defenders do not respond appropriately. AI-enabled technologies will make every type of cyberattack easier to perform and likely more accurate.

AI-enabled cyberattacks are likely to become more frequent and cause more damage to more people, especially people who are unprepared. This is already happening. But that is all we can say with confidence right now. We do not know whether AI-enabled technologies will make cyberattacks a few percent worse or hundreds of times worse. We just know things are going to get worse, especially if defenders do not respond appropriately.

Right now, preliminary testing results show that AI-enabled tools are going to make social engineering easier for attackers, even though most attackers are not yet using AI-enabled technologies. The tools most phishers use today are not AI-enabled, yet. So, for example, if you are compromised by a social engineering attack this year, it is very likely that the compromise did not involve an AI-enabled tool.

But attackers’ social engineering and hacking tools will increasingly become AI-enabled or AI-assisted (it has already started), and that AI use will make whatever they are trying to do more accurate and more successful. In a very short period of time, many or most cybercrime tools will be AI-enabled. There is a good chance that every future cyberattack will be AI-enabled or AI-assisted in some way and will be more accurate or damaging because of it.

AI to the Rescue

At the same time, it is helpful to remember that the good actors invented AI and have been using it for far longer than the bad actors. At KnowBe4, we have been using AI for nearly six years and we are researching and enabling it in everything we do. We have been investing a lot into figuring out how AI can be used to make our customers safer.

It is no exaggeration to say that everyone in our company discusses, learns about, and researches AI for some part of every day. Our CEO, Stu Sjouwerman, has long begun each day with a meeting focused on how we can improve customers’ lives using AI. The entire company has been using AI-enabled tools for a long time.

We look at AI from both an attacker’s perspective and a defender’s perspective. We have been testing, researching, and simulating how AI can be used to make social engineering attacks more accurate and realistic. And we are testing and researching how we can use AI to defeat all attacks, including AI-enabled attacks. We have been creating and using AI-enabled tools and technologies for a while now. It is not new to us.

At KnowBe4, all our AI-enabled technologies have been encapsulated in what we call Artificial Intelligence Defense Agents (AIDA). AIDA, version one, first appeared over five years ago. It is an AI-native platform that impacts everything we do and enables long-term culture change and human risk reduction for our customers.

We have long been using AIDA in several products (including our flagship Kevin Mitnick Security Awareness Training and PhishER), and more use cases are coming. We are adding many more AI-enabled features and products to make what we do (i.e., helping people to decrease human-based cybersecurity risk) more accurate and efficient. 

For many years, we have allowed our customers to use AIDA to pick which simulated phishing templates are sent to end users to test their behavior. From that single use case, we know that AI-enabled template selection “tricks” and educates more recipients than templates selected by human admins. How much more? Right now, the improvement is in the single digits, but we are adding far more functionality, such as allowing AI to help figure out (a rough sketch of the general idea follows the list below):

  • What difficulty level of simulated phishing emails should be sent to each end user
  • What types of training content should be sent to end users
  • What types of quizzes should be sent to end users
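To make the adaptive-selection idea concrete, here is a minimal, purely illustrative sketch in Python. The data fields, thresholds, and template names are all hypothetical; this is not KnowBe4’s AIDA implementation, just the general pattern of choosing a template tier from a user’s past behavior.

```python
# Purely illustrative sketch: adaptive selection of a simulated phishing
# template difficulty based on a user's past results. This is NOT KnowBe4's
# AIDA implementation; all names and thresholds here are hypothetical.
from dataclasses import dataclass
import random

@dataclass
class UserHistory:
    emails_sent: int   # simulated phishing emails previously sent to this user
    clicks: int        # times the user clicked a simulated phish
    reports: int       # times the user correctly reported one

def pick_difficulty(history: UserHistory) -> str:
    """Choose an easy/medium/hard template tier from past behavior."""
    if history.emails_sent == 0:
        return "easy"                      # no data yet: start simple
    click_rate = history.clicks / history.emails_sent
    report_rate = history.reports / history.emails_sent
    if click_rate > 0.20:
        return "easy"                      # still falling for obvious lures
    if report_rate > 0.50 and click_rate < 0.05:
        return "hard"                      # consistently spots phish: raise difficulty
    return "medium"

TEMPLATES = {
    "easy":   ["Generic prize notification", "Obvious password reset"],
    "medium": ["Shipping delay notice", "Shared document invite"],
    "hard":   ["Spoofed internal HR memo", "Vendor invoice follow-up"],
}

def pick_template(history: UserHistory) -> str:
    tier = pick_difficulty(history)
    return random.choice(TEMPLATES[tier])

if __name__ == "__main__":
    user = UserHistory(emails_sent=12, clicks=0, reports=9)
    print(pick_template(user))   # likely a "hard" template for this user
```

The point of the sketch is only the feedback loop: each user’s past results drive what they see next, which is the kind of per-user tailoring that a hand-picked, one-size-fits-all template cannot match.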

On a related note, others outside our industry have test cases showing similar single-digit percentage improvements from using AI so far. For example, a 2024 political mailing campaign agency publicly shared that AI-generated emails produced a 3-4% improvement in campaign donations over emails written by humans. (Sorry, I do not have a supporting link; I heard it reported on a popular national television channel.)

It does not take much thought to figure out that if AI produces a 3-4% increase in campaign donations, that sort of increase in “conversion rate” might apply to AI-generated phishing campaigns as well. And that is just right now, while AI-enabled tools are only starting to be figured out and improved.

Consider 3-4% a floor for how much worse real-world phishing could get without an appropriate AI-enabled response. So, AI is going to make social engineering and phishing worse, but so far, not incredibly worse…at least not yet. Phishers’ conversion rates are likely to improve as their AI-enabled malicious tools improve.
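To put that floor in perspective, here is a bit of back-of-the-envelope arithmetic. The campaign size, baseline success rate, and lift below are made-up illustrative numbers, not measured data; the only point is that even a small relative improvement adds up at phishing scale.

```python
# Illustrative arithmetic only: the volumes and baseline rates below are
# hypothetical, not measured data. The point is that even a small relative
# lift in "conversion rate" adds up at phishing-campaign scale.
emails_sent   = 1_000_000   # hypothetical campaign size
baseline_rate = 0.03        # hypothetical 3% of recipients fall for the phish
ai_lift       = 0.04        # the 3-4% relative improvement discussed above

baseline_victims = emails_sent * baseline_rate
ai_victims       = emails_sent * baseline_rate * (1 + ai_lift)

print(f"Baseline victims: {baseline_victims:,.0f}")                 # 30,000
print(f"With a 4% relative lift: {ai_victims:,.0f}")                # 31,200
print(f"Additional victims: {ai_victims - baseline_victims:,.0f}")  # 1,200
```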

Luckily, defenders are likely to improve their defenses as well, if not more. We are very confident that AIDA and our AI-enabled tools and technologies will make users safer. Our preliminary testing already shows this in our data, and as AI-enabled technologies become embedded in everything we do, defenses against all attacks will only get better.

AIDA will soon allow KnowBe4 customer administrators to type in natural language commands and queries to configure and operate their KnowBe4 products. For example, admins will be able to look for high-risk users, add them to a new group, and then create educational and simulated phishing campaigns that target them…all by typing in a few sentences.

What used to take anywhere from a few minutes to nearly half an hour can be performed in one to two minutes, simply by typing or asking. This type of AI-assisted product functionality will make admins more efficient, and that could translate into improved outcomes.
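Purely as a hypothetical illustration of that kind of workflow (and not KnowBe4’s actual product, API, or AIDA), the sketch below uses naive keyword matching as a stand-in for the natural-language understanding an AI agent would provide, mapping one typed request to an ordered list of pretend admin actions.

```python
# Hypothetical illustration only: this is not KnowBe4's product, API, or AIDA.
# It sketches the general pattern of turning a natural-language admin request
# into a sequence of concrete actions. Real systems would use an AI model,
# not keyword matching; the action names here are made up.
def handle_admin_request(request: str) -> list[str]:
    """Map a plain-English request to an ordered list of (pretend) actions."""
    actions = []
    text = request.lower()
    if "high-risk" in text:
        actions.append("query_users(risk_score_above=80)")
    if "group" in text:
        actions.append("create_group('High Risk - Q3') and add matched users")
    if "phishing campaign" in text or "training" in text:
        actions.append("schedule_campaign(targets='High Risk - Q3')")
    return actions

print(handle_admin_request(
    "Find my high-risk users, put them in a new group, "
    "and start a phishing campaign and training for them."
))
```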

Attendees at KB4-CON, our 2024 annual conference in Orlando, heard and saw the phases and details of our AI-enabled plans for the next few years. We are not resting. We are enabling and using AI to best fight malicious hackers and their attacks.

When people tell you they are worried about AI-enabled attacks, remember that the good actors invented AI and are using it even more than the adversaries are.


Request A Demo: Security Awareness Training

New-school Security Awareness Training is critical to enabling you and your IT staff to connect with users and help them make the right security decisions all of the time. This isn't a one-and-done deal; continuous training and simulated phishing are both needed to mobilize users as your last line of defense. Request your one-on-one demo of KnowBe4's security awareness training and simulated phishing platform and see how easy it can be!

Request a Demo!

PS: Don't like to click on redirected buttons? Cut & Paste this link in your browser:

https://www.knowbe4.com/kmsat-security-awareness-training-demo


