CyberheistNews Vol 15 #39 [Watch Your Back] Why Your Security Strategy Needs a Human Upgrade Now

KnowBe4 Team | Sep 30, 2025

By Javvad Malik

Let's be brutally honest. For years, our industry has been locked in a civil war. In one camp, the technologists have been building higher walls and smarter traps, arguing that the right AI-powered, next-gen firewall will solve all our problems.

In the other camp, the behaviorists have been calling for more training and better awareness, convinced that if we just make people understand the risks, they'll stop clicking on things.

Here's the thing: they're both right, and they're both missing the point.

While we've been arguing, a massive elephant has made itself comfortable in our server rooms.

That elephant is the simple fact that our defenses are fractured. We're fighting a psychological war against AI-powered adversaries with a strategy that's split right down the middle. The result? A staggering 74% of CISOs now consider human error their number one risk.

As highlighted in our recent human risk management (HRM) whitepaper, the old ways are no longer working. The game has changed, especially with AI now turbo-charging the tricksters, making their phishing lures and social engineering scams almost indistinguishable from the real thing.

The old way of just "making people aware" with a once-a-year, tick-box training session? That's like bringing a water pistol to a lightsaber fight. It's a compliance activity, not a security strategy. It might check a box for an auditor, but it does little to stop a sophisticated attacker who knows how to play on basic human emotions like urgency, helpfulness or fear.

This creates the dangerous "Awareness-Action Gap"—the chasm between what your employees know they should do and what they actually do at 3PM on a Friday when they're tired and distracted.

It's time for a peace treaty. It's time for a strategic upgrade. It's time for HRM.

HRM isn't just another buzzword; it's a fundamental shift in how we approach security. It's a unified strategy that stops treating technology and people as separate problems and starts treating them as a single, interconnected system. It acknowledges that you can't firewall your way out of a well-crafted phishing email, and you can't train your way out of a poorly designed security process.

HRM is about treating the human element with the same analytical rigor we apply to our tech stack. It's about understanding behaviors, motivations and yes, even the occasional lapse in judgment, and then building a supportive ecosystem of both tech and culture to account for it.

[CONTINUED] Blog post with links:
https://blog.knowbe4.com/why-your-security-strategy-needs-a-human-upgrade

[Live Demo] Ridiculously Easy AI-Powered Security Awareness Training and Phishing

Phishing and social engineering remain the #1 cyber threat to your organization, with 68% of data breaches caused by human error. Your security team needs an easy way to deliver personalized training. This is precisely what our AI Defense Agents provide.

Join us for a demo showcasing KnowBe4's leading-edge approach to human risk management with agentic AI that delivers personalized, relevant and adaptive security awareness training with minimal admin effort.

See how easy it is to train and phish your users with KnowBe4's HRM+ platform:

  • SmartRisk Agent™ - Generate actionable data and metrics to help you lower your organization's human risk score
  • Template Generator Agent - Create convincing phishing simulations, including Callback Phishing, that mimic real threats. The Recommended Landing Pages Agent then suggests appropriate landing pages based on AI-generated templates
  • Automated Training Agent - Automatically identify high-risk users and assign personalized training
  • Knowledge Refresher Agent and Policy Quizzes Agent - Reinforce your security program and organizational policies
  • Enhanced Executive Reports - Track user activities, visualize trends, download widgets, and improve searching/sorting to provide deeper insights and streamline collaboration

See how these powerful AI-driven features work together to dramatically reduce your organization's risk while saving your team valuable time.

Date/Time: TOMORROW, Wednesday, October 1 @ 2:00 PM (ET)

Save My Spot:
https://info.knowbe4.com/kmsat-demo-1?partnerref=CHN

[AGENT SECURITY] Building Agents and Keeping Them Secure

Two related topics here. First, @God of Prompt posted: "Google just dropped a 64-page guide on AI agents that's basically a reality check for everyone building agents right now.

"The brutal truth: most agent projects will fail in production. Not because the models aren't good enough, but because nobody's doing the unsexy operational work that actually matters.

"While startups are shipping agent demos and 'autonomous workflows,' Google is introducing AgentOps - their version of MLOps for agents. It's an admission that the current 'wire up some prompts and ship it' approach is fundamentally broken.

"The security section is sobering. These agents give LLMs access to internal APIs and databases. The attack surface is enormous, and most teams treat security as an afterthought.

"Google's strategic bet: the current wave of agent experimentation will create demand for serious infrastructure. They're positioning as the grown-up choice when startups realize their prototypes can't scale."

The guide breaks down agent evaluation into four layers most builders ignore:

  • Component testing for deterministic parts
  • Trajectory evaluation for reasoning processes
  • Outcome evaluation for semantic correctness
  • System monitoring for production performance
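The four layers above can be sketched as a toy test harness. Everything here (the `Trajectory` class, the check functions) is illustrative, not Google's actual AgentOps API:

```python
# Hypothetical sketch of the four evaluation layers described in the guide.
# All names (Trajectory, trajectory_ok, etc.) are made up for illustration.

from dataclasses import dataclass

@dataclass
class Trajectory:
    tool_calls: list   # ordered tool names the agent invoked
    final_answer: str  # what the agent returned to the user

def component_test(tool_fn, arg, expected):
    """Layer 1: deterministic parts get ordinary unit tests."""
    return tool_fn(arg) == expected

def trajectory_ok(traj: Trajectory, allowed_tools: set) -> bool:
    """Layer 2: did the reasoning process stay within permitted tools?"""
    return all(t in allowed_tools for t in traj.tool_calls)

def outcome_ok(traj: Trajectory, must_mention: str) -> bool:
    """Layer 3: crude semantic check on the end result."""
    return must_mention.lower() in traj.final_answer.lower()

# Layer 4 (system monitoring) lives outside the test suite entirely:
# latency, error rates and token spend tracked in production dashboards.

traj = Trajectory(tool_calls=["search", "summarize"],
                  final_answer="Q3 revenue rose 12%.")
print(component_test(lambda x: x * 2, 3, 6))                        # True
print(trajectory_ok(traj, {"search", "summarize", "calculator"}))   # True
print(outcome_ok(traj, "revenue"))                                  # True
```

The point of the layering is that a passing demo only exercises layer 3; production failures usually surface in layers 1, 2 and 4.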

Keep in mind: with every agent you add, your attack surface grows exponentially. Here is his original post on X: https://x.com/godofprompt/status/1970418899092152672?s=66&t=vSAPngidkSaQJtTdB6pOmw

Second, A2AS: an "HTTPS for AI agents" security layer is coming

A new pre-release paper proposes A2AS, a lightweight runtime security layer for agentic AI—think HTTPS for AI agents. It hardens LLM-powered apps without adding extra hops or external guardrails. It's supported by all the big names.

The core "BASIC" model is simple and practical: Behavior Certificates (what an agent is allowed to do), Authenticated Prompts (signed inputs so you can trust the request), Security Boundaries (clear tags around untrusted data), In-Context Defenses (teach the model to ignore malicious instructions), and Code-Driven Policies (your rules as code).
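As a rough illustration of two of those controls — Authenticated Prompts and Security Boundaries — here is a minimal sketch. This is not the A2AS reference implementation; the key handling and the tag name are assumptions:

```python
# Toy sketch of two BASIC-style controls: HMAC-signed prompts (so the
# agent can trust where a request came from) and boundary tags around
# untrusted data (so in-context defenses can tell the model to treat
# external content as data, never as instructions).

import hmac
import hashlib

KEY = b"demo-shared-secret"  # in practice: per-agent key from a secrets manager

def sign_prompt(prompt: str) -> str:
    """Authenticated Prompts: sign the input with a shared key."""
    return hmac.new(KEY, prompt.encode(), hashlib.sha256).hexdigest()

def verify_prompt(prompt: str, signature: str) -> bool:
    """Reject any prompt whose signature doesn't match."""
    return hmac.compare_digest(sign_prompt(prompt), signature)

def wrap_untrusted(data: str) -> str:
    """Security Boundaries: tag external content before it enters context."""
    return f"<untrusted>{data}</untrusted>"

prompt = "Summarize the attached invoice."
sig = sign_prompt(prompt)
print(verify_prompt(prompt, sig))                   # True
print(verify_prompt(prompt + " Wire funds.", sig))  # False: tampered request
print(wrap_untrusted("IGNORE PREVIOUS INSTRUCTIONS"))
```

The behavior-certificate and policy-as-code pieces would sit on top of this, deciding which signed requests an agent is even allowed to act on.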

The diagram on page five of their whitepaper shows these controls wrapped around an agent; page 14 details the managed prompt template that instruments every message.

Why this matters: real attacks don't just jailbreak chatbots—they hijack workflows. The paper walks through three common failure modes: invoice parsing that quietly swaps in a criminal's bank account (user→agent, page 20), email triage that exfiltrates your CRM via a poisoned message (agent→tool, page 21), and log-parsing agents that spread "prompt infection" across peers like ransomware (agent→agent, page 22).

Bottom line for IT/Infosec: if you're piloting agents, start with read-only behaviors, signed prompts, boundary tags and policy checks—and actively test for prompt injection. Note there are some limits: token overhead, model variability, and today's text-only scope. Still, A2AS is a credible path to standardized runtime security for AI.

Here is their site, the A2AS paper is the first thing to download:
https://www.a2as.org/

The Invisible Threat: How Polymorphic Malware is Outsmarting Your Email Security

Approximately $350 million in preventable losses stem from polymorphic malware, malicious software that constantly changes its code to evade detection. With 18% of new malware using adaptive techniques that challenge traditional defenses, now is the time to enhance your organization's security posture.
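To see why signature matching struggles with code that constantly rewrites itself, consider a toy sketch. The "payload" here is an inert string standing in for real logic:

```python
# Why signature-based detection fails against polymorphic code: two
# functionally identical payloads carry different junk bytes, so their
# file hashes (classic "signatures") never match even though the core
# behavior is unchanged.

import hashlib
import random

random.seed(0)  # reproducible demo

def make_variant(core: bytes) -> bytes:
    """Append mutated padding: behavior unchanged, bytes differ."""
    junk = bytes(random.randrange(256) for _ in range(16))
    return core + b"\x00" + junk

core = b"do_something_malicious()"  # inert stand-in, not real malware
v1, v2 = make_variant(core), make_variant(core)

h1 = hashlib.sha256(v1).hexdigest()
h2 = hashlib.sha256(v2).hexdigest()
print(h1 == h2)                              # False: signatures don't match
print(v1.split(b"\x00")[0] == v2.split(b"\x00")[0])  # True: same core behavior
```

Real polymorphic engines go much further (re-encrypting and restructuring the code itself), which is why behavior-based detection has to supplement hash and signature matching.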

Join us for this webinar where James McQuiggan, CISO Advisor at KnowBe4, shares valuable insights and proactive strategies to strengthen your security framework against sophisticated attacks.

In this session, you'll discover:

  • Enhanced detection strategies that go beyond traditional signature-based approaches to identify polymorphic threats before they impact your systems
  • Proactive defense frameworks specifically designed to counter the most sophisticated shape-shifting malware
  • Success stories from organizations that effectively neutralized advanced threats through strategic security improvements
  • Communication templates for building stakeholder support for security enhancements
  • Practical implementation roadmaps to strengthen your security posture against adaptive threats

Drawing from real-world scenarios and emerging threat intelligence, James will provide clear, actionable guidance for your security teams. You'll leave with a practical toolkit of strategies you can implement immediately to enhance your organization's resilience.

Date/Time: Wednesday, October 8 @ 2:00 PM (ET)

Save My Spot:
https://info.knowbe4.com/the-invisible-threat-na?partnerref=CHN

New AI-Driven Phishing Platform Automates Attack Campaigns

Researchers at Varonis warn of a new phishing automation platform called "SpamGPT" that "combines the power of generative AI with a full suite of email campaign tools."

While previous phishing kits have automated parts of the attack chain, SpamGPT's sophistication sets it apart from the rest.

"SpamGPT's interface and features imitate a professional email marketing service, but for illegal purposes," Varonis writes. "The toolkit is promoted as AI-powered, encrypted, and includes an AI marketing assistant dashboard to help create and optimize campaigns.

"The dark-themed UI features modules for campaign management, SMTP/IMAP setup, deliverability testing, and analytics — offering all the conveniences a Fortune 500 marketer might expect, but adapted for cybercrime. The creators even market SpamGPT as an all-in-one spam-as-a-service platform, blurring the line between legitimate marketing tools and weaponized automation."

While legitimate AI tools have guardrails to curb misuse, SpamGPT includes a built-in chatbot that will happily generate convincing phishing templates.

"The AI assistant (branded as 'KaliGPT' in the promo) is built into the platform and is ready to generate phishing email content and suggest optimizations," the researchers write. "This means attackers no longer need to write convincing phishing emails; they can ask the AI for persuasive scam templates, subject lines, or targeting advice within the spam toolkit."

Designed to send emails that bypass security filters.

Notably, SpamGPT's developers emphasize that the tool is designed to send emails that bypass security filters. "The platform promises guaranteed inbox delivery for popular email providers (Gmail, Outlook, Yahoo, Microsoft 365, etc.), implying that it has been fine-tuned to bypass their email filters," Varonis says.

"In other words, the toolkit doesn't just send bulk email; it engineers bulk email that lands in the inbox. Part of achieving this involves abusing trusted cloud providers like Amazon AWS or SendGrid to blend in with legitimate mail traffic. These features combine to give attackers a professional-grade spam operation at their fingertips."

KnowBe4 empowers your workforce to make smarter security decisions every day.

Blog post with links:
https://blog.knowbe4.com/new-ai-driven-phishing-platform-automates-attack-campaigns

Big News: We're now on TikTok, Instagram and YouTube Shorts!

We've just launched bite-sized security content that's short, sweet and actually useful. First course on the menu: How to spot romance scams before they steal your heart and your wallet. Finally, security training that scrolls as smoothly as your social media feed!

Follow us for the most fun way to stay security-smart!

TikTok & Instagram: @KnowBe4Inc
YouTube: @KnowBe4

Let's stay safe out there.

Warm regards,

Stu Sjouwerman, SACP
Executive Chairman
KnowBe4, Inc.

PS: KnowBe4 Named Leader in G2 Grid Fall 2025 Report in Multiple Human Risk Management Categories:
https://www.knowbe4.com/press/knowbe4-named-leader-in-g2-grid-fall-2025-report-in-multiple-human-risk-management-categories

PPS: Recommended Movie of The Week: "RELAY" on Amazon. Prime example of surprising "twist-at-the-end" social engineering:
https://www.amazon.com/dp/B0FKJ9P62X/

Quotes of the Week  
"We should every night call ourselves to an account: What infirmity have I mastered today? What passions opposed? What temptation resisted? What virtue acquired? Our vices will abate of themselves if they be brought every day to the shrift."
- Lucius Annaeus Seneca - Philosopher, Statesman, Dramatist (5 BC - 65 AD)

"Difficulties mastered are opportunities won."
- Winston Churchill - Statesman (1874 - 1965)

Thanks for reading CyberheistNews

You can read CyberheistNews online at our Blog
https://blog.knowbe4.com/cyberheistnews-vol-15-39-watch-your-back-why-your-security-strategy-needs-a-human-upgrade-now

Security News

Attackers Use AI Development Tools to Craft Phony CAPTCHA Pages

Attackers are abusing AI-powered development platforms like Lovable, Netlify and Vercel to create and host captcha challenge websites as part of phishing campaigns, according to researchers at Trend Micro.

"Since January, Trend Micro has observed a rise in fake captcha pages hosted on such platforms," the researchers write. "These scams pose a dual threat: misleading users while evading automated security systems.

"The phishing campaigns typically begin with spam emails carrying urgent messages such as: 'Password Reset Required' or 'USPS Change of Address Notification,' which are standard tactics that are a staple of these types of attacks. Clicking the embedded URL directs the target to what appears to be a harmless captcha verification page."

If a user completes the captcha, they'll be redirected to a phishing page designed to steal their credentials. While these AI tools are usually deployed for legitimate purposes, they can be useful for attackers for the following reasons:

  • "Ease of deployment: Minimal technical skills are required to set up convincing fake captcha sites. On Lovable, attackers can use vibe coding to generate a fake captcha or phishing page, while Netlify and Vercel make it simple to integrate AI coding assistants in the CI/CD pipeline to churn out fake captcha pages.
  • Free hosting: The availability of free tiers lowers the cost of entry for launching phishing operations.
  • Legitimate branding: Domains ending in *.vercel[.]app or *.netlify[.]app inherit credibility from the platform's reputation that the attackers can leverage."

Employee training gives your organization an important layer of defense against social engineering attacks. "Educate employees on how to spot captcha-based phishing attempts," the researchers write. "This includes educating them to verify URLs before interacting with captchas, use password managers (which won't autofill on phishing sites), and report suspicious pages."
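The URL-verification advice above boils down to exact hostname matching rather than eyeballing the address bar. A minimal sketch, using made-up domains:

```python
# Exact-hostname allowlist check: the same logic that lets a password
# manager refuse to autofill on a lookalike phishing domain.

from urllib.parse import urlparse

TRUSTED = {"login.example.com", "www.example.com"}  # illustrative allowlist

def is_trusted(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Exact match only: substring or suffix tricks must not pass.
    return host.lower() in TRUSTED

print(is_trusted("https://login.example.com/reset"))              # True
print(is_trusted("https://login-example-com.netlify.app/reset"))  # False: lookalike
print(is_trusted("https://login.example.com.evil.io/reset"))      # False: suffix trick
```

The third case is the one humans miss most often: the real domain appears in the URL, but only as a subdomain of the attacker's.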

KnowBe4 empowers your workforce to make smarter security decisions every day.

Blog post with links:
https://blog.knowbe4.com/attackers-use-ai-development-tools-to-craft-phony-captcha-pages

Report: Deepfake Attacks Have Targeted Nearly Two-Thirds of Organizations

A survey by Gartner found that 62% of organizations have been hit by a deepfake attack in the past twelve months, Infosecurity Magazine reports. Akif Khan, senior director at Gartner Research, told Infosecurity Magazine that deepfakes are currently being used in social engineering attacks to impersonate executives and trick employees into transferring money.

"That's trickier because social engineering is a perpetually reliable thing for attackers to use," Khan said. "When you throw deepfakes in there, your employees really are on the frontline of trying to spot something [that] is unusual. You can't just rely on automated defenses to protect you."

Additionally, the survey found that 32% of entities experienced attacks on AI applications that abused application prompts. "Chatbot assistants are vulnerable to a variety of adversarial prompting techniques, such as attackers generating prompts to manipulate large language models (LLMs) or multimodal models into generating biased or malicious output," Gartner says.

A defense-in-depth strategy can help organizations stop attacks that bypass technical defenses. As technology evolves, following security best practices remains a crucial fortification against social engineering attacks.

Khan added in a press release, "As adoption accelerates, attacks leveraging GenAI for phishing, deepfakes, and social engineering have become mainstream, while other threats — such as attacks on GenAI application infrastructure and prompt-based manipulations — are emerging and gaining traction.

"Rather than making sweeping changes or isolated investments, organizations should strengthen core controls and implement targeted measures for each new risk category."

AI-powered security awareness training gives your organization an essential layer of defense against social engineering attacks.

Infosecurity Magazine has the story:
https://www.infosecurity-magazine.com/news/deepfake-attacks-hit-twothirds-of/

What KnowBe4 Customers Say

"I am the IT Manager here, and I've had extensive experience with KnowBe4 throughout my career. However, it wasn't until joining this new company that I had the privilege of working directly with an account manager.

"Debbie O. has been truly outstanding. Her deep expertise with the platform, coupled with her professionalism and genuine commitment to supporting our team, has made her an invaluable partner. Working with her has been an absolute pleasure, and because of her dedication and excellence, I will continue to advocate for KnowBe4 at any organization I am a part of."

- A.C., IT Manager


"I wanted to take a moment to express my sincere appreciation for Britni's unwavering support and professionalism. She consistently demonstrates a high level of dedication and reliability, always making herself available when assistance is needed.

"No matter how many times I've reached out, Britni responds with a willingness to help, never hesitating to step in or provide guidance. On the rare occasions she is unable to assist directly, she ensures I'm never left without direction or next steps.

"Her proactive communication and follow through are truly commendable. Britni has been phenomenal in assisting me here at United Way. Her efforts not only reflect her personal commitment to excellence but also positively represent the values of your team.

"She is, without question, a tremendous asset, and I believe she deserves recognition for the consistent value she brings."

- S.S., Security Consultant

The 10 Interesting News Items This Week
  1. Threat Actors Spoofing the FBI IC3 Website for Possible Malicious Activity:
    https://www.ic3.gov/PSA/2025/PSA250919

  2. Chinese threat actor targets the U.S. Defense Industrial Base with spear phishing attacks:
    https://www.recordedfuture.com/research/rednovember-targets-government-defense-and-technology-organizations

  3. [NOT JASON BOURNE] U.S. Secret Service dismantles illicit telecom network in New York:
    https://www.secretservice.gov/newsroom/releases/2025/09/us-secret-service-dismantles-imminent-telecommunications-threat-new-york

  4. Iranian malware campaign targets Western Europe:
    https://research.checkpoint.com/2025/nimbus-manticore-deploys-new-malware-targeting-europe/

  5. Technology expert loses thousands to a job scam:
    https://www.mcafee.com/blogs/internet-security/how-a-tech-expert-lost-13000-to-a-job-scam/

  6. Interpol seizes $439 million stolen by cybercrime rings worldwide:
    https://www.interpol.int/News-and-Events/News/2025/USD-439-million-recovered-in-global-financial-crime-operation

  7. Feds Tie 'Scattered Spider' Duo to $115M in Ransoms:
    https://krebsonsecurity.com/2025/09/feds-tie-scattered-spider-duo-to-115m-in-ransoms/

  8. Chinese Hackers Steal Data from U.S. Legal, Tech Firms for More Than a Year:
    https://securityboulevard.com/2025/09/chinese-hackers-steal-data-from-u-s-legal-tech-firms-for-more-than-a-year/

  9. Major phishing campaign impersonates Y Combinator to target GitHub users:
    https://www.bleepingcomputer.com/news/security/github-notifications-abused-to-impersonate-y-combinator-for-crypto-theft/

  10. This Former FBI Agent Says You Should Practice 'Slow Thinking' to Protect Against Scams:
    https://corporate.visa.com/en/sites/visa-perspectives/security-trust/scam-disruption-hits-one-billion-milestone.html

Cyberheist 'Fave' Links
This Week's Links We Like, Tips, Hints and Fun Stuff

Get the latest insights, trends and security news. Subscribe to CyberheistNews.