Imagine an artificial intelligence (AI) system developed by a mad scientist to leverage the full capabilities of large language models (LLMs).
Then the scientist went truly mad, pairing text, voice, and video generation with mutating, self-perpetuating malware and continuous improvement through reinforcement learning. The concoction that emerges from this wild lab is Frankenphisher, the most effective social engineering AI you can possibly imagine.
This AI, Frankenphisher, uses new LLM capabilities to creep up on organizations and attack them with monster arms across sectors including hospitality, education, manufacturing, and technology. Thanks to its self-teaching capabilities, Frankenphisher hides its spooky appearance and easily evades every deployed defense mechanism. Its social engineering attempts are scarily convincing: voice fakes paired with doctored videos are impossible to tell apart from real recordings. People follow its ruses and deceptions like a vampire to fresh blood.
See the Frankenphisher video below to immerse yourself in a cautionary tale on AI and cybersecurity, situated in a not-too-far dystopian future full of malicious monsters and spooky villains:
Although this does sound like a dystopian future, it might not be too far-fetched. The technological means to create an AI like Frankenphisher already exist. This goes beyond ChatGPT, which has its ethical guidelines and technical filters to prevent misuse. Though not perfect, those filters do stop the boogeyman and other spooky characters with malicious intent from a certain level of abuse.
With FraudGPT and WormGPT, cybercriminals are quickly catching up, developing their own tools that provide unrestricted access to the underlying LLM's capabilities. These tools appear to operate without any type of filter. The Wild West of LLMs is just around the corner once you enter the dark web.
But let's return to that dystopian future. Once out of hand, an AI as capable as Frankenphisher might wreak havoc at a scale humans cannot grasp. While we can still protect ourselves with basic monster-prevention tactics against social engineering, such as "think before you click," we are challenged by our limited ability to consume and process information when making decisions. After all, generating new and convincing monster notes to flood our decision-making and trick us into scary choices is something LLMs can be very effective at.
As we make our way toward a future with AI like Frankenphisher, the only question remaining is whether we need an equally super-intelligent AI to defend ourselves. How much longer we will remain in control of our own decisions is the question we must ask ourselves in a dark and spooky future. The world must apply ethical design considerations to keep such developments from getting out of hand.
KnowBe4 enables your workforce to make smarter security decisions every day. Over 65,000 organizations worldwide trust the KnowBe4 platform to strengthen their security culture and reduce human risk.