I am not scared of AI. What I mean is that I do not think AI is going to kill humanity Terminator-style. I do think AI is going to be responsible for more cybercrime and more realistic phishing messages, but phishing is already pretty bad. Social engineering, even without AI, is already involved in 70% to 90% of successful cyber attacks.
I think AI will make the problem worse, if that is possible, but how much worse, neither I nor anyone else knows. My hope is that while AI makes social engineering and phishing worse, good actors' AI-driven defenses will offset the AI advances by the bad actors so that the overall damage is not that much worse. After all, the good actors invented AI, and they use AI more than anyone else. There is not a cybersecurity vendor that is not using AI to improve their product's accuracy. AI is not just something used by bad actors. And it is not a zero-sum game.
Yes, AI will make some social engineering and phishing campaigns more successful. The days of the misspelled, wrong-language, weirdly constructed phishing message are likely coming to an end. Anyone, even someone not at all familiar with the target victim's language, can now deliver a very legitimate-looking social engineering message.
AI will also markedly improve deepfakes, delivering more realistic-looking images and voices. I even read an article saying AI could fake all our signatures and handwriting. Thankfully, signatures and handwriting are already pretty much artifacts of a bygone era.
Here Is What Really Scares Me About AI
The fact that AI can craft realistic-looking social engineering messages does not shock or scare me. Perhaps that is because we have all been expecting it. Seeing AI craft a phishing message is interesting and produces "wows" from the crowd. But the part that scares me, and most audiences, is seeing AI interact and carry on conversations with very realistic-looking responses.
A large percentage of potential phishing victims see the phishing lure and respond with questions of their own. For example, if a victim is being asked to pay an invoice, they might ask why the vendor is not on their approved vendor list. If the phisher sends a (booby-trapped) document, the receiver might ask what the document contains. If the phisher is asking the victim to update payment instructions to a new bank, the victim might ask what happened with the old bank, and so on. And in general, phishers are not great at crafting realistic responses, especially if their primary language is not the primary language of the targeted victim.
KnowBe4 has been testing AI-driven phishing capabilities for years to keep up with, and predict, what our adversaries will likely be doing in the future. We have multiple teams dedicated to this type of research. One of our common demos shows how easily and how well AI can respond to a potential victim's questions. This is the part of AI that actually scares me. AI-generated responses take only seconds, and they are so realistic-looking that I do not know how a potential victim could tell the difference between the AI response and a response from a legitimate sender.
Every industry has its own culture, terms, and professional vernacular. For example, hotels and hospitals frequently discuss "census," meaning how many rooms or beds are occupied. Oil drillers might talk about wellheads. Billing departments talk about A/P and A/R, and so on. Anyone with any time in a particular field will learn the common terms and ways of talking about business in their industry.
The average human phisher, even one targeting a particular industry, does not understand the culture and vernacular of the professionals they are targeting. Usually, their phishing messages do not target any industry at all. Their social engineering messages are very general, something that could apply to all people and all industries.
But AI increasingly allows social engineers to craft industry-specific messages using industry vernacular. And if a potential victim asks a question, the phisher can respond with a very legitimate-looking answer. All the attacker has to do is input the victim's question into the AI, and a few seconds later, the AI spits out a very realistic-looking response. This is very likely to trick more potential victims into becoming exploited victims.
When we demo this capability to employees and customers, it never fails to silence the room. Here's an example demo presentation from KB4-CON 2023 to see what I'm talking about:
There are no wows. It is usually just met with gasps and silence. I am not scared of AI…usually…but this new capability is what worries me. Social engineering and phishing are already pretty bad. But whatever the "conversion rate" is of potential victims becoming exploited victims, AI is likely to increase that percentage.
I take comfort in the fact that KnowBe4, and every other cybersecurity vendor, is using AI-enabled technologies to help better protect customers. We have been using AI to help deliver more valuable training for over five years. Our AI-driven features help deliver more targeted training and testing to those who need it. Our AI-driven simulated phishing campaigns reach and teach more users with the content they need better than human-selected campaigns can. We also use AI in our more advanced products, like PhishER Plus, which helps to better identify phishing messages.
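To make the defensive side a little more concrete, here is a minimal, hypothetical sketch of what machine learning-based phishing detection can look like in principle: a simple TF-IDF text model with logistic regression, trained to score how phishing-like a message is. To be clear, this is purely illustrative; it is not how PhishER Plus or any KnowBe4 product actually works, and the example messages and labels are made up.

# A toy illustration (not KnowBe4's actual method) of ML-based phishing
# scoring: a TF-IDF bag-of-words model plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = phishing-like, 0 = legitimate.
messages = [
    "Urgent: your account is suspended, verify your password now",
    "Please update the wire instructions to our new bank immediately",
    "Invoice attached, open the document to avoid late fees",
    "Here are the meeting notes from Tuesday's standup",
    "The weekly census report is ready for your review",
    "Reminder: the A/P reconciliation is due Friday",
]
labels = [1, 1, 1, 0, 0, 0]

# Word and word-pair features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new, unseen reply; a higher number means more phishing-like.
reply = "Your old bank was acquired, so please send payment to the new account"
print(model.predict_proba([reply])[0][1])

Real products layer far more signals on top of message text, such as sender reputation, headers, and URLs, but the basic idea is the same: let the model learn the patterns so it can flag the suspicious messages humans might miss.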
There are a lot of reasons to be concerned about AI-driven cybercrime. We do not yet know how much worse AI will make social engineering and phishing, other than that it will absolutely make things worse. AI's ability to interact with potential phishing victims in a legitimate-looking way concerns me even more than deepfakes. But rest assured that KnowBe4, and others, are not sitting back and waiting for AI-driven attacks to happen. We are proactively researching and preparing for the more advanced attacks to come. We are confident that we will be able to provide our customers with a strong, proactive defense against AI-driven cybercrime.