CyberheistNews Vol 16 #01 AI & Cybersecurity in 2026: Top 10 Predictions for Threats and Defenses

KnowBe4 Team | Jan 6, 2026


AI & Cybersecurity in 2026: Top 10 Predictions for Threats and Defenses

As we head into 2026, artificial intelligence looms as both innovator and instigator in cybersecurity. From small businesses to global enterprises, orgs will grapple with AI-driven threats even as they leverage AI for defense.

The following ten predictions, five emerging threats followed by five defensive advancements, chart a visionary (and cautionary) path for the year ahead. Each prediction highlights the role of large language models (LLMs) and autonomous agents, showing real-world implications and steps cybersecurity leaders should consider. Buckle up: the AI era's challenges and solutions are coming fast.

THREATS:

  1. AI-Powered Phishing Becomes Indistinguishable from Humans
    LLMs will enable mass-produced, perfectly personalized phishing emails that mimic tone, context and writing style with frightening accuracy. Even savvy users will be fooled unless they get trained with deepfake simulations.

  2. Deepfakes Trigger a Crisis of Trust
    Synthetic voices and videos will impersonate executives and vendors, authorizing wire transfers or leaking false info. Seeing and hearing will no longer be believing. Orgs must train their employees for out-of-band verification.

  3. Autonomous Malware Learns and Adapts on the Fly
    Malware will embed AI to mutate code, evade detection and adapt to its environment mid-attack. Traditional antivirus is toast. Only behavior-based, adaptive defenses will hold the line, including in email.

  4. Prompt Injection and AI Hijacking Go Mainstream
    Attackers will target the AI itself, tricking LLM-powered agents into leaking data, making bad decisions or executing harmful actions. Securing your AI systems will be as critical as protecting your endpoints.

  5. AI Lowers the Barrier for Cybercrime-as-a-Service
    Anyone with access to underground LLM tools can launch full-spectrum cyber attacks, including phishing, malware and social engineering. No technical expertise required. Expect a flood of semi-skilled attackers moving fast and wide.

DEFENSES:
  1. AI-Driven Threat Detection Becomes Table Stakes
    Security platforms will rely on machine learning to spot anomalies, correlate subtle signals and catch attacks in real time. SMBs and enterprises alike must adopt AI-powered monitoring or risk being outpaced.

  2. AI Joins the Red Team
    Autonomous agents will simulate attacks that mimic real AI-driven adversaries, helping organizations harden defenses before real breaches occur. Pen tests will shift from annual exercises to continuous, AI-driven stress tests.

  3. Autonomous SOCs Take Shape
    Tier-one alert triage, correlation and even containment will be handled by AI agents acting as 24/7 security analysts. Human responders will move up the stack, managing playbooks instead of alerts.

  4. LLM Co-Pilots Boost Security Teams' Output
    AI assistants will draft incident reports, analyze logs and help analysts make faster decisions. Even small security teams can operate like seasoned pros with the right co-pilot in place.

  5. Zero-Trust Evolves to Fight Deepfakes and AI Spoofing
    Trust is redefined—every identity, message or system interaction must be verified by design. Behavioral biometrics, digital watermarks and strict out-of-band confirmations become non-negotiable.

[Live Demo] Ridiculously Easy AI-Powered Security Awareness Training and Phishing

Phishing and social engineering remain the #1 cyber threat to your org, with 68% of data breaches caused by human error. Your security team needs an easy way to deliver personalized training—this is precisely what our AI Defense Agents provide.

Join us for a demo showcasing KnowBe4's leading-edge approach to human risk management with agentic AI that delivers personalized, relevant and adaptive security awareness training with minimal admin effort.

See how easy it is to train and phish your users with KnowBe4's HRM+ platform:

  • NEW! Deepfake Training Content - Generate hyperrealistic deepfakes of your own executives to prepare users to spot AI-driven manipulation and deepfakes
  • SmartRisk Agent™ - Generate actionable data and metrics to help you lower your organization's human risk score
  • Template Generator Agent - Create convincing phishing simulations, including Callback Phishing, that mimic real threats. The Recommended Landing Pages Agent then suggests appropriate landing pages based on AI-generated templates
  • Automated Training Agent - Automatically identify high-risk users and assign personalized training
  • Knowledge Refresher Agent and Policy Quizzes Agent - Reinforce your security program and organizational policies

See how these powerful AI-driven features work together to dramatically reduce your organization's risk while saving your team valuable time.

Date/Time: TOMORROW, Wednesday, January 7 @ 2:00 PM (ET)

Save My Spot:
https://info.knowbe4.com/kmsat-demo-1?partnerref=CHN2

Expectations for AI in Business in 2026

Here are seven general developments to expect in the new year. Get ready:

Agent-to-agent communications and commerce:
Businesses must solve for consumers using agents to gather information, engage with brands and make purchases. This may alter how we design user experiences on the web and in apps, and it could rapidly evolve marketing, sales and customer experience strategies.

More organizations move from the Piloting AI phase to the Scaling AI phase:
The Piloting AI phase is defined by prioritizing and running a limited number of pilot projects with narrowly-defined use cases. While many organizations and departments still find themselves here, an increasing number of businesses are entering the Scaling AI phase, which is characterized by AI being infused into every aspect of the organization (marketing, sales, service, operations, product, HR, finance, legal) to create competitive advantages, accelerate growth and drive innovation.

Adoption of reasoning models and capabilities:
Reasoning gives AI models the abilities to build plans, think logically, analyze situations, evaluate evidence and solve problems. As more professionals and business leaders understand and apply these capabilities (both understanding and adoption remain very low in enterprises to date), the future of work will begin to transform more rapidly.

Investments in AI literacy:
As the capability gap widens, organizations are recognizing that technology alone isn't a silver bullet. Massive investments are being made into education and training programs to drive AI literacy. We define AI literacy as, "the knowledge, skills, behaviors and mindset needed to drive human-centered AI transformation."

Shift from AI-driven optimization to AI-driven innovation:
While initial AI adoption in organizations has focused on cutting costs and streamlining existing processes, the next wave is about creation of value. Optimization is using AI to do the same things better, faster or cheaper. Innovation is using AI to do new things that create new forms of value for customers and the organization. Optimization is 10% thinking. Innovation is 10x thinking.

Custom evals tied to economically valuable work:
Standard AI model eval benchmarks are no longer sufficient for the enterprise. Businesses will increasingly build custom evaluation frameworks that measure an AI's performance against specific business KPIs, tasks and workflows rather than academic IQ tests.

AI becomes a default layer in every software workflow:
AI is shifting from a standalone tool to a capability layer embedded across the business software stack. AI models are being infused into marketing solutions, CRMs, ERPs, analytics, HR systems and service platforms. We are entering the era of "omni intelligence" in which AI is integrated into every part of our professional lives.

With grateful acknowledgment to The Artificial Intelligence Show podcast, which I warmly recommend to help you stay up to date. One of my fave pods:
https://podcast.smarterx.ai/

NEW Deepfake Training: Empowering Your Users to Recognize What AI Can Fake

Your users are being targeted right now. Deepfake attacks happen every few minutes, and nearly half of all organizations have already been hit. When a deepfake lands in your user's inbox, will they spot it or fall for it?

In this session, Perry Carpenter, Chief Human Risk Management Strategist, and Chris Littlefield, Product Manager, pull back the curtain on the next era of social engineering. Deepfakes, AI agents and synthetic narratives are reshaping the threat landscape and traditional training no longer prepares users for attacks that feel real.

You'll learn how to build a workforce that stays calm, curious and grounded in truth, even when a scam sounds exactly like someone they trust.

You'll explore:

  • How attackers use plausibility, framing and misdirection to make AI-generated impersonations feel instantly legitimate
  • Recent deepfake and voice-clone incidents that expose where human judgment faltered—and how better cognitive defenses would have changed the outcome
  • Training methods that build narrative awareness and emotional self-regulation, preventing both overreaction and paralysis
  • Practical verifications your employees can practice to recognize a fake even when an email sounds right, a voice sounds familiar or a video "looks close enough"
  • NEW! KnowBe4's Deepfake Training Content shows how to create a custom deepfake training experience featuring your own leaders to transform abstract risk into unforgettable learning moments

You'll leave the webinar with the strategy and tools to help employees recognize and validate AI-driven manipulation, plus measurable ways to demonstrate to leadership how you can reduce real-world deepfake risks.

Date/Time: Wednesday, January 14 @ 2:00 PM (ET)

Can't attend live? No worries — register now and you will receive a link to view the presentation on-demand afterwards.

Save My Spot:
https://info.knowbe4.com/new-deepfake-training-na?partnerref=CHN

Most Parked Domains Lead Users to Scams or Malware

Over 90% of parked domains now direct users to malicious content, compared to less than 5% a decade ago, according to researchers at Infoblox.

"Parking threats are fueled by lookalike domains," Infoblox explained. "No domain is immune. When one of our researchers tried to report a crime to the FBI's Internet Crime Complaint Center (IC3), they accidentally visited ic3[.]org instead of ic3[.]gov.

"Their phone was quickly redirected to a false "Drive Subscription Expired" page. They were lucky to receive a scam; based on what we've learnt, they could just as easily receive an information stealer or trojan malware. The real threat from parked domains comes from their ability to hide malicious activity."

The parked domains themselves may not be malicious, but many of them are involved in complex advertising networks that eventually redirect users to scams, scareware or malware downloads.
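The ic3[.]org vs ic3[.]gov mix-up above shows how lookalike domains feed parking traffic. As a rough illustration (this is not Infoblox's methodology), a naive check can flag domains that reuse a trusted name under a different TLD:

```python
from typing import Optional

def lookalike_tld(candidate: str, trusted: set) -> Optional[str]:
    """Return the trusted domain that `candidate` imitates by swapping
    the TLD (e.g. ic3.org vs ic3.gov), or None if no match.

    Naive split on the last dot; a production check would use the
    public suffix list to handle multi-label TLDs like .co.uk.
    """
    name, _, tld = candidate.lower().rpartition(".")
    for t in trusted:
        t_name, _, t_tld = t.lower().rpartition(".")
        if name == t_name and tld != t_tld:
            return t
    return None

print(lookalike_tld("ic3.org", {"ic3.gov"}))  # → ic3.gov
print(lookalike_tld("ic3.gov", {"ic3.gov"}))  # → None
```

A real defense would combine a check like this with DNS telemetry, since — as the researchers note — the parked domain itself often looks benign until the redirect chain fires.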

"At the heart of the matter is a feature referred to as direct search or zero click parking, which is intended to directly deliver users relevant content based on the parked domain name," the researchers explain.

"When a domain owner opts into direct search, traffic to the domain is sold to advertisers who bid on keywords and traffic characteristics. In practice, the site visitor is usually funneled through a series of traffic distribution systems (TDSs) operated by third-party advertising platforms, creating a complex web where a legitimate business model is weaponized for abuse."

This complexity makes it difficult for technical defenses to prevent users from ending up on malicious sites. "[T]here is no clear path to effectively report abuse in the parking ecosystem," Infoblox says. "Reputable parking platforms gather KYC information on their direct customers, but the threat to internet users and enterprises is generally out of their purview.

"Moreover, the anti-fraud mechanisms these companies use inadvertently protect the bad advertisers from detection as well. Finally, an unintended consequence of Google's advertising policy changes may be to exacerbate the threat by causing domain holders to increasingly adopt direct search."

Blog post with links:
https://blog.knowbe4.com/most-parked-domains-lead-users-to-scams-or-malware

Identify Weak User Passwords In Your Organization With the Newly Enhanced Weak Password Test

Cybercriminals never stop looking for ways to hack into your network, but if your users' passwords can be guessed, they've made the bad actors' jobs that much easier.

Verizon's Data Breach Investigations Report showed that 81% of hacking-related breaches use either stolen or weak passwords. The Weak Password Test (WPT) is a free tool to help IT administrators know which users have passwords that are easily guessed or susceptible to brute force attacks, allowing them to take action toward protecting their organization.

Weak Password Test checks Active Directory for several types of weak password-related threats and generates a report of users with weak passwords.

Here's how Weak Password Test works:

  • Connects to Active Directory to retrieve password table
  • Tests against 10 types of weak password-related threats
  • Displays which users failed and why
  • Does not display or store the actual passwords
  • Just download, install and run. Results in a few minutes!
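The WPT's internals aren't public, but the core hash-comparison idea behind the steps above can be sketched. In this illustrative example, sha256 is a stand-in (Active Directory actually stores NT/MD4 hashes of the UTF-16LE password), and only matches against the weak list are reported — plaintext is never recovered for users who pass:

```python
import hashlib

def find_weak_users(user_hashes, weak_passwords):
    """Report which users' stored hashes match a known-weak password.

    user_hashes: {username: hex digest of the stored password hash}
    weak_passwords: iterable of known-weak candidate passwords
    """
    weak_lookup = {
        hashlib.sha256(p.encode()).hexdigest(): p for p in weak_passwords
    }
    # Only hashes found in the weak list are flagged; all other
    # passwords stay opaque — nothing is displayed or stored.
    return {u: weak_lookup[h] for u, h in user_hashes.items() if h in weak_lookup}

def h(p):  # helper to simulate hashes as stored in the directory
    return hashlib.sha256(p.encode()).hexdigest()

users = {"alice": h("T7#vQ9!mZp2$"), "bob": h("123456")}
print(find_weak_users(users, ["password", "123456", "letmein"]))  # → {'bob': '123456'}
```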

Don't let weak passwords be the downfall of your network security. Take advantage of KnowBe4's Weak Password Test and gain invaluable insights into the strength of your password protocols.

Download Now:
https://info.knowbe4.com/weak-password-test-chn


Happy New Year! And let's stay safe out there.

Warm regards,

Stu Sjouwerman, SACP
Executive Chairman
KnowBe4, Inc.

PS: Here’s my latest article as an official member of Forbes Technology Council:
https://www.forbes.com/councils/forbestechcouncil/2026/01/02/7-market-research-trends-to-watch-for-in-2026/

PPS: [EYE OPENER] Charted: How Global Economic Power Shifted (1980–2025):
https://www.visualcapitalist.com/charted-how-global-economic-power-shifted-1980-2025/

Quotes of the Week  
"2025 was an exciting and mildly surprising year of LLMs. LLMs are emerging as a new kind of intelligence, simultaneously a lot smarter than I expected and a lot dumber than I expected."

"Jagged Intelligence. The word I came up with to describe the (strange, unintuitive) fact that state of the art LLMs can both perform extremely impressive tasks (e.g. solve complex math problems) while simultaneously struggle with some very dumb problems."

- Andrej Karpathy (born 1986, OpenAI co-founder)

Thanks for reading CyberheistNews

You can read CyberheistNews online at our Blog
https://blog.knowbe4.com/cyberheistnews-vol-16-01-ai-cybersecurity-in-2026-top-10-predictions-for-threats-and-defenses

Security News

Amazon Warns of Fraudulent North Korean Job Applicants

Amazon has blocked more than 1,800 suspected North Korean applicants from joining the company since April 2024, TechRadar reports. Amazon's Chief Security Officer, Stephen Schmidt, said in a LinkedIn post that DPRK-linked applications have increased by 27% quarter over quarter this year.

"Their LinkedIn strategies are getting sophisticated," Schmidt wrote. "We're seeing them hijack dormant accounts through compromised credentials to gain verification. We've also identified networks where people hand over access to their accounts in exchange for payment."

Schmidt said Amazon has observed the following indicators associated with DPRK applicants:

  • "They're increasingly targeting AI and machine learning roles, likely because these are in higher demand as companies adopt AI.
  • These operatives often work with facilitators managing "laptop farms": U.S. locations that receive shipments and maintain domestic presence, while the worker operates remotely from outside the country.
  • Educational backgrounds keep changing. We've watched the strategy shift from East Asian universities, to institutions in no-income-tax states, to now California and New York schools. We look for degrees from schools that don't offer claimed majors, or dates misaligned with academic schedules."

Schmidt added, "This isn't Amazon-specific. This is likely happening at scale across the industry." These fraudulent job applicants use social engineering to obtain remote employment at foreign companies, then transfer their salaries to the North Korean government.

TechRadar cites a recent report from Microsoft that found that hundreds of U.S. companies, including many Fortune 500 firms, have unknowingly hired these workers. AI-powered security awareness training gives your organization an essential layer of defense against social engineering attacks.

TechRadar has the story:
https://www.techradar.com/pro/security/amazon-is-being-reportedly-deluged-with-fake-north-korean-job-applicants

New ConsentFix Technique Tricks Users Into Handing Over OAuth Tokens

Researchers at Push Security have observed a new variant of the ClickFix attack that combines "OAuth consent phishing with a ClickFix-style user prompt that leads to account compromise."

The technique, which the researchers call "ConsentFix," tricks victims into copying and pasting a localhost URL containing an authorization token, then pasting it into a phishing page.

"Authorization code flow is an OAuth 2.0 protocol for web applications to get a user's permission to access protected resources," the researchers explain.

"When using the authorization code flow to connect an app, it combines the code with an OAuth secret held by the app in exchange for a token (the valuable part). However, some apps can't protect a secret — for example, apps that run on your mobile device or desktop.

"In this case, the code alone is enough to generate an OAuth token, without the secret — which is what is being exploited here."

In the attacks observed by Push Security, the threat actors abused the Azure CLI OAuth app to target Microsoft accounts. "Essentially, the attacker tricks the victim into logging into Azure CLI, by generating an OAuth authorization code — visible in a localhost URL — and then pasting that URL (including the code) into an attacker-controlled page," the researchers write.

"This then creates an OAuth connection between the victim's Microsoft account and the attacker's Azure CLI instance." Push Security points out that these attacks are very difficult to block, since they rely on legitimate tools and social engineering tactics:

  • "The attack happens entirely inside the browser context, removing one of the key detection opportunities for ClickFix (because it doesn't touch the endpoint).
  • Delivering the lure via a Google Search watering hole attack completely circumvents email-based anti-phishing controls.
  • Targeting a first-party app like Azure CLI means that many of the mitigating controls available for third-party app integrations do not apply — making this attack way harder to prevent.
  • Because there's no login required, phishing-resistant authentication controls like passkeys have no impact on this attack."
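To make the lure concrete: after the victim authorizes the Azure CLI app, the browser lands on a localhost redirect URL carrying the authorization code in its query string. This hypothetical sketch (the code value is made up) shows what an attacker's page parses out of the pasted URL; for a public client like Azure CLI, that code alone can then be redeemed for a token, no client secret required:

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs

def extract_auth_code(redirect_url: str) -> Optional[str]:
    """Pull the OAuth authorization code out of a localhost redirect URL —
    the value a ConsentFix page harvests when a victim pastes the URL."""
    parsed = urlparse(redirect_url)
    # The authorization code flow for native apps redirects to localhost.
    if parsed.hostname not in ("localhost", "127.0.0.1"):
        return None
    codes = parse_qs(parsed.query).get("code")
    return codes[0] if codes else None

# Illustrative URL shape; the code value here is not a real token.
url = "http://localhost:8400/?code=0.AXkAexample-code&state=xyz"
print(extract_auth_code(url))  # → 0.AXkAexample-code
```

The takeaway for defenders: anything that trains users to copy full URLs (not just "codes") into other pages deserves the same suspicion as a password prompt.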

Over 70,000 organizations worldwide trust the KnowBe4 HRM+ platform to strengthen their security culture and reduce human risk.

Blog post with links:
https://blog.knowbe4.com/new-consentfix-technique-tricks-users-into-handing-over-oauth-tokens

What KnowBe4 Customers Say

"Hi Bryan, Thank you very much for your message, and I can confirm that we are using your platform with good results here. Kudos to Damian C., your Customer Implementation Specialist, who did a great job setting up our environment."

- D.P., IT Director


"Hi Bryan, Everything has been great so far. Yeffry and our onboarding team has done a great job getting us back on the platform, and Lauren on the support team did a fantastic job helping me out with a lingering issue that was the result of some of our previous KnowBe4 experience. I’m very pleased with things so far and we’re just getting started. I can’t wait to add in some of the newer features of the platform here soon!"

- A.K. Vice President, IT and Systems

The 10 Interesting News Items This Week
  1. Georgia arrests ex-spy chief over alleged protection of scam call centers:
    https://therecord.media/republic-of-georgia-former-spy-chief-arrested-scam-centers

  2. New Spear-Phishing Attack Targeting Security Individuals in the Israel Region:
    https://gbhackers.com/spear-phishing-2/

  3. CISOs are managing risk in survival mode:
    https://www.helpnetsecurity.com/2025/12/29/ciso-risk-management/

  4. Fraud Leaders Warn of Deepfakes, Stablecoin Risks Ahead:
    https://www.bankinfosecurity.com/fraud-leaders-warn-deepfakes-stablecoin-risks-ahead-a-30407

  5. U.S. shuts down phisherfolk's $14.6M password-hoarding platform:
    https://www.theregister.com/2025/12/24/us_shutters_phishermens_146m_passwordhording/

  6. Coupang to Issue $1.17 Billion(!) in Vouchers Over Data Breach:
    https://www.securityweek.com/coupang-to-issue-1-17-billion-in-vouchers-over-data-breach/

  7. European Space Agency Confirms Breach After Hacker Offers to Sell Data:
    https://www.securityweek.com/european-space-agency-confirms-breach-after-hacker-offers-to-sell-data/

  8. U.S. cybersecurity experts plead guilty for ransomware attacks, face 20 years in prison each — group demanded up to $10 million from each victim:
    https://www.tomshardware.com/tech-industry/cyber-security/u-s-cybersecurity-experts-plead-guilty-for-ransomware-attacks-face-20-years-in-prison-each-group-demanded-up-to-usd10-million-from-each-victim

  9. FBI warns of virtual kidnapping scams using altered social media photos:
    https://www.ic3.gov/PSA/2025/PSA251205

  10. Spearphishing Campaign Abuses the npm Registry (the JavaScript package manager):
    https://socket.dev/blog/spearphishing-campaign-abuses-npm-registry#new_tab

Cyberheist 'Fave' Links
This Week's Links We Like, Tips, Hints and Fun Stuff
