AI in 2024: The Top 10 Cutting Edge Social Engineering Threats

Stu Sjouwerman | Dec 29, 2023

The year 2024 is shaping up to be a pivotal moment in the evolution of artificial intelligence (AI), particularly in the realm of social engineering. As AI capabilities grow exponentially, so too do the opportunities for bad actors to harness these advancements for more sophisticated and potentially damaging social engineering attacks. Let's explore the top 10 expected AI developments of 2024 and their implications for cybersecurity.

1. Exponential Growth in AI Reasoning and Capabilities

As Vinod Khosla of Khosla Ventures points out, "The level of innovation in AI is hard for many to imagine." AI's reasoning abilities are expected to soar, potentially outperforming human intelligence in certain areas. This could lead to more convincing and adaptable AI-driven social engineering tactics, as bad actors leverage these reasoning capabilities to craft more persuasive attacks.

2. Multimodal Large Language Models (MLLMs)

MLLMs, capable of processing and understanding various data types, will take center stage in 2024. These models could be used to create highly convincing and contextually relevant phishing messages or fake social media profiles, enhancing the efficacy of social engineering attacks.

3. Text to Video (T2V) Technology

The advancement in T2V technology means that AI-generated videos could become a new frontier for misinformation and deepfakes. This could have significant implications for fake news and propaganda, especially in the context of the 2024 elections, or real-time Business Email Compromise attacks.

4. Revolution in AI-Driven Learning

AI's ability to identify knowledge gaps and enhance learning can be exploited to manipulate or mislead through tailored disinformation campaigns, targeting individuals based on their learning patterns or perceived weaknesses.

5. Challenges in AI Regulation

Governments' attempts to regulate AI to prevent catastrophic risks will be a key area of focus. However, the speed of AI innovation will outpace regulatory efforts, leading to a period where advanced AI technologies will inevitably be used in social engineering attacks.

6. The AI Investment Bubble and Startup Failures

The surge in AI venture capital indicates a booming market, but the potential failure of AI startups whose business models become obsolete could leave behind a set of orphaned, advanced, but unsecured AI tools available for malicious use.

7. AI-Generated Disinformation in Elections

With roughly 40 national elections scheduled around the world in 2024, the threat of AI-generated disinformation campaigns is more significant than ever. These sophisticated AI tools are already being used to sway public opinion and create political unrest.

8. AI Technology Available For Script Kiddies

As AI becomes more accessible and cost-effective, the likelihood of advanced AI tools falling into the wrong hands increases. This could lead to a rise in AI-powered social engineering attacks by less technically skilled bad actors.

9. Enhanced AI Hardware Capabilities

Jensen Huang, co-founder and chief executive of Nvidia, told the DealBook Summit in late November that “there’s a whole bunch of things that we can’t do yet.” However, advancements in AI hardware, such as Neural Processing Units (NPUs), will lead to faster and more sophisticated AI models. This could enable real-time, adaptive social engineering tactics, making scams more convincing and harder to detect.

10. AI in Cybersecurity and the Arms Race

While AI advancements provide tools for cybercriminals, they also empower more effective AI-driven security systems. This is leading to an escalating arms race between cybersecurity measures and attackers' tactics, where real-time AI monitoring against AI-driven social engineering attacks might become reality.

The upshot is that 2024 is set to be a landmark year in AI development, with far-reaching implications for social engineering. It is crucial for cybersecurity professionals to stay ahead of these trends, build a strong security culture, adapt their strategies, and continually step their workforce through frequent security awareness and compliance training.
