The Current State of Cybersecurity Should Fear AI Tools Like ChatGPT



Malicious use of the text-based AI has already been seen in the wild, and the speculative ways attackers could use ChatGPT may spell temporary doom for cybersecurity solutions.

There are two things that, over the years, the most capable cybercriminal gangs have relied on to avoid detection and turn their efforts into a successful attack: obfuscation and credibility. Obfuscation covers anything from continually modifying malware code to evade security solutions to using a lookalike domain name to fool a user who isn’t paying attention when opening an email. Credibility usually shows up in phishing and social engineering attacks through convincing spoofed webpages, persuasive emails, and the impersonation of legitimate brands.
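To make the lookalike-domain trick concrete, here is a minimal sketch of one way to flag domains that sit within a small edit distance of a known brand. It assumes Python; the brand list and similarity threshold are illustrative assumptions, and real detectors also account for homoglyphs, punycode, and registration data.

```python
# Minimal lookalike-domain check: flag domains that closely resemble a
# known brand without matching it exactly. Illustrative sketch only.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["paypal.com", "microsoft.com", "knowbe4.com"]  # assumed list

def looks_alike(domain: str, threshold: float = 0.85) -> bool:
    """Return True if `domain` closely resembles, but is not, a known brand."""
    domain = domain.lower()
    for brand in KNOWN_BRANDS:
        similarity = SequenceMatcher(None, domain, brand).ratio()
        if domain != brand and similarity >= threshold:
            return True
    return False

print(looks_alike("paypa1.com"))   # True: digit '1' swapped in for letter 'l'
print(looks_alike("example.com"))  # False: not close to any listed brand
```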

Enter ChatGPT.

Unless you’ve been hiding under a rock for the last month or so, you already know about the power of OpenAI’s chatbot and have probably played with it yourself. We’re already seeing ChatGPT misused in a few ways that benefit the cybercriminal.

  • Say Goodbye to Poorly-Written Malicious Content – One of the telltale signs of a phishing email is really bad grammar, misspellings, and generally clumsy writing. Go ahead and ask ChatGPT to “write an email explaining how the recipient’s company is overdue in paying an invoice” and you’ll get a grammatically correct, educated-sounding, and far more convincing email. Given that the source of those poorly-written emails is threat actors who aren’t native English speakers, ChatGPT instantly makes them experts in sounding credible! Everything from emails, to chat responses, to any kind of mid-attack interaction will now sound professional.
  • ChatGPT-Based Obfuscating Malware! – Yes, you read that right. Security researchers at CyberArk have determined that the AI engine can easily be used to create polymorphic malware. Using some specific wording to bypass its built-in constraints, the researchers were able to generate malware code via the ChatGPT API and encode it in base64 (a minimal sketch of that mechanic follows this list). There’s even discussion of having the malware talk to ChatGPT mid-attack to establish a unique evasion technique. This is truly scary stuff, should it prove possible in an actual attack.
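To make that mechanic concrete, here is a hedged sketch of the pattern the researchers describe: ask a model for code at runtime, then base64-encode whatever comes back. The prompt here is deliberately benign, and the openai Python package (v1.x), model name, and prompt are assumptions for illustration; CyberArk has not published those details.

```python
# Sketch of the "fetch code from the model, then encode it" pattern.
# Benign prompt; package and model choices are assumptions, not CyberArk's.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model for illustration
    messages=[{
        "role": "user",
        "content": "Write a Python function that lists the files in a directory.",
    }],
)
generated_code = response.choices[0].message.content

# Base64-encoding the freshly generated text means no two runs need to
# carry the same plaintext payload -- the evasion angle CyberArk flagged.
encoded = base64.b64encode(generated_code.encode("utf-8"))
print(encoded[:60])
```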

I’m not saying that ChatGPT itself is malicious. In fact, its creators have put constraints in place to disallow this kind of misuse. But given how newly released it is, many have already found ways around those constraints.

What these use cases bring to light is how the face of attacks is going to change from here on out. As cybercriminals take advantage of AI tools (and it’s not too far a stretch to imagine them building their own, without the typical constraints against all things malicious), we should expect them to jump ahead, with security vendors responding in kind: using AI to detect malicious content and code, much the same way tools already exist for schools to identify AI-written essays.
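On the defensive side, one common building block of those AI-text detectors is perplexity: text that a language model finds highly predictable is more likely to be machine-written. Here is a minimal sketch, assuming Python with the torch and transformers packages and using GPT-2 purely because it is small and public; production classifiers are considerably more sophisticated.

```python
# Toy perplexity scorer: a low score *suggests* (never proves) that text
# was machine-generated. Illustrative heuristic only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model compute its own cross-entropy loss.
        output = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(output.loss).item()

# Lower values mean the model found the text more predictable -- one weak
# signal among many that it may be AI-written.
print(perplexity("The invoice attached to this email is now 30 days overdue."))
```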


Request A Demo: Security Awareness Training

New-school Security Awareness Training is critical to enabling you and your IT staff to connect with users and help them make the right security decisions all of the time. This isn't a one-and-done deal; continuous training and simulated phishing are both needed to mobilize users as your last line of defense. Request your one-on-one demo of KnowBe4's security awareness training and simulated phishing platform and see how easy it can be!

Request a Demo!

PS: Don't like to click on redirected buttons? Cut & Paste this link in your browser:

https://www.knowbe4.com/kmsat-security-awareness-training-demo


