CyberheistNews Vol 16 #09 Fake Video Meeting Invites Trick Users Into Installing RMM Tools

KnowBe4 Team | Mar 3, 2026

Fake Video Meeting Invites Trick Users Into Installing RMM Tools

Threat actors are using phony meeting invites for Zoom, Microsoft Teams, Google Meet and other video conferencing applications to trick users into installing remote monitoring and management (RMM) tools, according to researchers at Netskope.

The invites lead to convincingly spoofed landing pages for fake video meetings, complete with a list of coworkers who have supposedly already joined the call. The page instructs the user to install a software update in order to join the video meeting.

"The payload, disguised as a software update, is a digitally signed remote monitoring and management (RMM) tool such as Datto RMM, LogMeIn or ScreenConnect," the researchers write.

"These tools empower attackers to remotely access victims' machines and gain full administrative control over their endpoints, potentially leading to data theft or the deployment of more destructive malware."

Since the meeting appears to have already started, users are more likely to ignore red flags and quickly install the phony update.

"As victims attempt to join the call, they are presented with a notification indicating that their application is out of date or incompatible," Netskope says. "To proceed, victims must download and execute a provided 'update' before being allowed to join.

"By framing the malicious payload as a critical technical fix for a legitimate business task, attackers increase the likelihood that users will manually bypass security warnings in order to avoid missing the session."

These RMM tools have legitimate uses and are digitally signed, so they're more likely to evade detection by security tools. "By deploying legitimate, digitally signed RMM tools rather than custom malware, the attackers can blend in with standard corporate traffic," the researchers write.

"These tools can be pre-approved in enterprise environments, allowing the attackers to bypass signature-based security controls and gain a persistent administrative foothold without raising immediate alarms."

AI-powered security awareness training can give your organization an essential layer of defense by teaching your employees how to recognize social engineering attacks. KnowBe4 empowers your workforce to make smarter security decisions every day. Over 70,000 organizations worldwide trust the KnowBe4 HRM+ platform to strengthen their security culture and reduce human risk.

Blog post with links:
https://blog.knowbe4.com/fake-video-meeting-invites-trick-users-into-installing-rmm-tools

[NEW] AI Agents that Train, Test and Adapt to Manage Human Risk

Traditional security awareness training was built for a simpler time, but AI has changed the rules. With the rise of hyper-realistic deepfakes and sophisticated vishing, your attack surface is expanding faster than your team can keep up. When you're resource-constrained, trying to change user behavior and accurately measure risk can feel like an impossible uphill battle.

Join us for a live demo to see how KnowBe4's AIDA (AI Defense Agents) transforms your security culture from a manual, one-size-fits-all approach into a fully orchestrated, autonomous defense system that adapts to user behavior in real-time.

  • NEW! AIDA Orchestration - Stop spending time on manual campaign management with an always-on, AI-driven system that autonomously manages and personalizes your phishing simulations and security awareness training
  • Deepfake Training Content - Generate hyper-realistic deepfakes of your own executives to prepare users to spot AI-driven manipulation and deepfakes
  • Template Generation and Recommended Landing Pages Agents - Automatically create convincing phishing simulations, including modern attack styles like callback phishing, paired with the most relevant landing page
  • Automated Training Agent - Automatically identify high-risk users and assign personalized training
  • Knowledge Refresher Agent and Policy Quiz Agents - Reinforce your security program and organizational policies

Discover how these powerful AI agents work together to dramatically reduce your organization's risk while saving your team valuable time.

Date/Time: TOMORROW, Wednesday, March 4, @ 2:00 PM (ET)

Save My Spot:
https://info.knowbe4.com/hrm-aida-3?partnerref=CHN2

We Are Thrilled to Announce That AIDA Orchestration Is Officially Live!

As attackers scale their efforts with AI-powered social engineering, you need a defense that can keep pace. AIDA Orchestration is our first truly autonomous capability, transforming security awareness from a manual process into an always-on, adaptive system.

Why This is a Game-Changer for You
Managing a security awareness program manually is time-consuming and often generic. AIDA Orchestration acts as the "Moderator Agent" for a human risk management program, coordinating other agents to ensure the right user gets the right training at the right time.

Hyper-Personalization:
Analyzes 316 individual indicators and 37 risk factors to tailor phishing simulations and training to each user's unique risk profile.

Reduced Administrative Burden:
Automates the full lifecycle of campaigns—reducing setup time from hours to mere seconds.

Unmatched Risk Reduction:
Customers using AIDA have seen significant reductions in their overall organizational risk scores.

Here is the page; and be sure to attend the webinar tomorrow (see above):
https://www.knowbe4.com/products/aida

The Convergence: Why Your Human Risk Management Strategy Can’t Ignore AI

The workplace is no longer just humans. If it doesn't already, your organization will soon manage a hybrid workforce of humans and AI agents working alongside your employees, accessing systems and making decisions. And both are targets!

Join us for an exclusive discussion between guest speaker Jinan Budge, VP & Research Director at Forrester, and Bryan Palma, President & CEO of KnowBe4. Together, they will explore the urgency of AI adoption and the seismic shift currently occurring in human risk management. This category emerged specifically to overcome the shortcomings of security awareness training. But when AI agents can be prompt-engineered just as easily as humans can be socially engineered, your security strategy needs to evolve.

You'll discover:

  • The current state of human risk management
  • Why traditional one-size-fits-all security awareness training fails to change behavior or prepare people for AI threats
  • The convergence of human and AI vulnerabilities and how phishing, deepfakes and prompt-engineered attacks exploit the same trust mechanisms whether the target is a human or an AI agent
  • How to detect and report on human and human-to-AI risk with business-ready insights leadership can understand and act upon
  • Practical first steps to build security programs that protect humans and agents, reduce manual overhead and scale with AI adoption

You'll leave with a clear understanding of where HRM is headed, how to measure and manage human risk at scale and concrete steps to secure your workforce.

Date/Time: Wednesday, March 11, @ 2:00 PM (ET)

Save My Spot:
https://info.knowbe4.com/ai-human-risk-management-webinar?partnerref=CHN

Integrated Cloud Email Security (ICES) vs Secure Email Gateway (SEG)

By James Dyer, KnowBe4 Threat Intelligence Lead

Cybercriminals continually evolve their techniques, leading to more successful phishing attacks. Using text-based attacks that rely on social engineering, along with highly targeted spear phishing, bad actors can bypass traditional email security and land in their target's inbox.

The 2023 Gartner Market Guide for Email Security states: "Impersonation and account takeover attacks via business email compromise (BEC) are increasing and causing direct financial loss, as users place too much trust in the identities associated with email, which is inherently vulnerable to deception and social engineering."

Gartner recommends that organizations should, "use email security solutions that include anti-phishing technology for targeted BEC protection that use AI to detect communication patterns and conversation-style anomalies, as well as computer vision for inspecting suspect URLs.

"Select products that can provide strong supply chain and AI-driven contact chain analysis for deeper inspection and can detect socially engineered, impersonated or BEC attacks."

Consequently, it is important for organizations to implement the right email security for their needs, protecting them from both inbound and outbound threats.

Read the top five FAQs in this blog post, which covers:

  • What is the main difference between a SEG and ICES?
  • Why are organizations replacing SEGs with ICES?
  • Does a SEG protect against internal email threats?
  • Is it difficult to deploy an ICES solution compared to a SEG?
  • Can I use both a SEG and an ICES solution together?

[CONTINUED] Blog post with links:
https://blog.knowbe4.com/ices-vs-seg-email-security

Is Your Domain Vulnerable to Spoofing Attacks?

Domain spoofing is a hacker's ultimate disguise. By masking an email to look like it's coming from your CEO, executive or trusted colleague, cybercriminals exploit the inherent trust of your employees. If your mail server isn't configured to stop these unauthorized senders, a spear-phishing attack becomes inevitable.

Is your email security actually protecting your organization, or are you leaving the door wide open to your employees' inboxes?

Stop guessing. Start testing.

Our free Domain Spoof Test is a simple, non-intrusive "pass/fail" assessment that simulates a real-world spoofing attempt.

How it works:

  1. Sign up with your business email.*
  2. Identify the gap: Within 48 hours, we'll send a spoofed test email "from you to you."
  3. Assess the risk: If the email makes it into your inbox, your domain is at risk for domain spoofing attacks. If we can spoof your domain, so can attackers.
  4. Build your defense: Once the gap is visible, we show you how to block these threats with AI-driven automation and train users to identify complex attacks that bypass filters.
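The "pass/fail" idea above comes down to whether your domain publishes an enforcing DMARC policy in DNS. As a hedged illustration (this is not KnowBe4's actual test, and the record string below is a made-up example), here is a minimal Python sketch that parses a DMARC TXT record and classifies a domain's spoofing exposure based on its `p` policy tag:

```python
# Minimal sketch: interpreting a DMARC TXT record to gauge spoofing exposure.
# The record string is a hypothetical example, not a real domain's policy.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record (e.g. 'v=DMARC1; p=reject; ...')
    into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def spoofing_exposure(record: str) -> str:
    """Classify exposure based on the DMARC 'p' (policy) tag."""
    policy = parse_dmarc(record).get("p", "none")
    if policy == "reject":
        return "protected: spoofed mail is refused"
    if policy == "quarantine":
        return "partial: spoofed mail goes to spam"
    return "exposed: spoofed mail is delivered"

example = "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
print(spoofing_exposure(example))  # exposed: spoofed mail is delivered
```

In practice you would fetch the `_dmarc.yourdomain.com` TXT record with a DNS client. A policy of `p=none` only monitors and does not block, which is why a spoofed test email can still land in the inbox even when DMARC is "configured."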

Sign Up Now:
https://info.knowbe4.com/free-cybersecurity-tools/domain-spoof-test-chn

NOTE: *This is a professional diagnostic tool. To qualify, you must be the person responsible for your organization's email security. Requests from personal domains (Gmail, AOL, Yahoo etc.) are not accepted.

Google Reports On Adversarial Use of AI in Late 2025

By Roger Grimes

Google Threat Intelligence Group recently released its latest report, "GTIG AI Threat Tracker: Distillation, Experimentation and (Continued) Integration of AI for Adversarial Use," on how malicious adversaries are using AI to commit cybercrimes.

Google's size, central position, threat intelligence and assessment capabilities make its report among the best out there. You can rely on what Google is seeing and concluding as a very good proxy for state-of-the-art AI-enabled cybercrime, especially at scale and committed by nation-states.

There may be a few pockets of smaller groups of cyber criminals or individuals using AI in other, more advanced ways, but Google is telling you what is happening broadly in the real world. It is what most of us have to worry about. How nation-states are using AI maliciously is a canary in the coal mine for the rest of us.

Google is seeing increasing use and sophistication of AI by our adversaries over time, including in social engineering.

Selected takeaways: Findings Related to Social Engineering

  • Large language models (LLMs) have become essential tools for technical research, targeting and the rapid generation of nuanced phishing lures
  • AI is used to create hyper-personalized phishing messages
  • Attackers are using AI to create "rapport building" phishing
  • AI is used to research potential targets, including using AI to do OSINT research
  • AI was used to target and research specific individuals

Other findings:

  • AI-generated malware is becoming more common
  • AI is used to write malicious code and scripts
  • AI is used to research known vulnerabilities
  • Intellectual property theft appears to be a big motivator and is usually accomplished against private users of AI (versus against large-scale AI)
  • Adversaries are creating or using services that "jailbreak" legitimate AI, MCP components and other legitimate AI APIs, so that they can be used for malicious purposes
  • Google detected attacks against their public AI where the attackers were trying to better understand Google AI's logic and reasoning (Google calls this 'model extraction' and 'distillation attacks')

Possibly the only good note was that Google has not seen an attack that fundamentally changes the threatscape (i.e., AI is being used to carry out traditional attacks). AI is just being used to do them more pervasively, with more personalization and fewer mistakes.

According to a recent Chainalysis report, AI-enabled cybercrimes were able to steal 4.5X more value. Let that sink in!

[CONTINUED] At the KnowBe4 blog:
https://blog.knowbe4.com/google-reports-on-adversarial-use-of-ai-in-late-2025


Let's stay safe out there.

Warm regards,

Stu Sjouwerman, SACP
Executive Chairman
KnowBe4, Inc.

PS: Introducing the AIDA Orchestration Agent: Always-On Human Risk Management Has Arrived:
https://blog.knowbe4.com/introducing-the-aida-orchestration-agent-alway-on-human-risk-management-has-arrived

PPS: [AMMO TO SEND TO YOUR USERS] What Happens If I Click A Phishing Link?:
https://blog.knowbe4.com/what-happens-click-phishing-link

Quotes of the Week  
"Nobody can give you wiser advice than yourself."
- Marcus Tullius Cicero - Orator and Statesman (106 - 43 BC)

"Honesty is the first chapter in the book of wisdom."
- Thomas Jefferson - Principal author of the Declaration of Independence and 3rd US President (1743 - 1826)

Thanks for reading CyberheistNews

You can read CyberheistNews online at our Blog
https://blog.knowbe4.com/cyberheistnews-vol-16-09-fake-video-meeting-invites-trick-users-into-installing-rmm-tools

Security News

Nation-State Threat Actors Incorporate AI to Streamline Attacks

Researchers at Google's Threat Intelligence Group (GTIG) warn that nation-state threat actors have adopted Gemini and other AI tools as essential components of their operations. The threat actors are using these tools to conduct research and reconnaissance, target victims and rapidly create phishing lures.

"Increasingly, threat actors now leverage LLMs to generate hyper-personalized, culturally nuanced lures that can mirror the professional tone of a target organization or local language," the researchers write.

"This capability extends beyond simple email generation into 'rapport-building phishing,' where models are used to maintain multi-turn, believable conversations with victims to build trust before a malicious payload is ever delivered. By lowering the barrier to entry for non-native speakers and automating the creation of high-quality content, adversaries can largely erase those "tells" and improve the effectiveness of their social engineering efforts."

Threat actors also abused a wide range of AI tools to host malicious commands for ClickFix social engineering attacks. The attackers bypassed safety guardrails used by ChatGPT, Copilot, DeepSeek, Gemini, Grok and others.

"While not a new malware technique, GTIG observed instances in which threat actors abused the public's trust in generative AI services to attempt to deliver malware," the researchers write. "GTIG identified a novel campaign where threat actors are leveraging the public sharing feature of generative AI services, including Gemini, to host deceptive social engineering content."

"This activity, first observed in early December 2025, attempts to trick users into installing malware via the well-established "ClickFix" technique. This ClickFix technique is used to socially engineer users to copy and paste a malicious command into the command terminal."

KnowBe4 empowers your workforce to make smarter security decisions every day.

Blog post with links:
https://blog.knowbe4.com/nation-state-threat-actors-incorporate-ai-to-streamline-attacks

Attackers Are Using Emojis to Hide Malicious Content

Attackers are using emojis to hide malicious code and evade security filters, according to researchers at SOS Intelligence. The threat actors are exploiting the way Unicode works, since many security systems aren't designed to detect these attacks.

"When you type a letter, number or emoji, your computer doesn't actually store that visual symbol," the researchers explain. "Instead, it stores a number that represents that character. This system is called Unicode, and it's what allows your computer to display everything from English letters to Chinese characters to emoji.

"For example, when you use the fire emoji, your computer stores it as the number U+1F525. Every character you can type has its own unique number in the Unicode system. This is brilliant for international communication, but it also creates opportunities for attackers."

Attackers are exploiting this technique in several ways, including using lookalike characters in phishing URLs, planting invisible characters to disguise malicious text and using emojis to hide malware traffic. The researchers note that it's difficult to defend against these attacks because Unicode has legitimate, important functionalities.

"Most security tools were built to detect patterns in regular ASCII text (the basic English letters, numbers and symbols)," the researchers write. "They look for suspicious keywords, known malicious code patterns or dangerous file types. But when attackers encode their attacks using Unicode tricks, these patterns become unrecognizable to the security system.

"Additionally, completely blocking Unicode would break legitimate functionality. Businesses operate globally, users have names in different languages and emojis are a standard part of modern communication. Security teams can't simply ban all non-English characters without severely impacting usability."

Attackers are always looking for new ways to evade security defenses so they can target humans directly. AI-powered security awareness training can give your organization an essential layer of defense against social engineering attacks.

SOS Intelligence has the story:
https://sosintel.co.uk/emoji-smuggling-hiding-malicious-code-in-plain-sight/

What KnowBe4 Customers Say

"I would like to take a moment to express my sincere appreciation for the outstanding work Kelli has been doing in supporting us. Your dedication, responsiveness and exceptional support are truly valued. We would like to formally recognize and thank you for the excellent service you have provided.

"Even when we experienced some technical challenges at the end of last year and had other options available, we chose to renew with KnowBe4 for another year largely because of the excellent support Kelli has consistently provided.

"We look forward to continuing our partnership and working together to further strengthen our security posture."

- L.B., PhD, MBA, CISSP, Chief Information Security Officer

The 10 Interesting News Items This Week
  1. Amazon: AI-assisted hacker breached 600 Fortinet firewalls in 5 weeks:
    https://www.bleepingcomputer.com/news/security/amazon-ai-assisted-hacker-breached-600-fortigate-firewalls-in-5-weeks/

  2. Cyber Association ISC2 launches code of conduct for security pros:
    https://www.computerweekly.com/news/366639403/Cyber-association-launches-code-of-conduct-for-security-pros

  3. UAE claims it stopped ‘terrorist’ ransomware attack:
    https://therecord.media/uae-claims-it-stopped-terrorist-ransomware-attack

  4. Phishing operation with links to Russia, Armenia compromised Western cargo companies, researchers find:
    https://therecord.media/phishing-operation-russia-armenia-targeting-us-european-cargo

  5. Ukrainian Gets 5 Years in US Prison for Aiding North Korean IT Fraud:
    https://www.securityweek.com/ukrainian-gets-5-years-in-us-prison-for-aiding-north-korean-it-fraud/

  6. Ransomware attacks rose in 2025 despite decrease in payments:
    https://therecord.media/ransomware-payments-chainalysis-cybercrime/

  7. Cost of Insider Incidents Surges 20% to Nearly $20m:
    https://www.infosecurity-magazine.com/news/cost-of-insider-incidents-surges/

  8. Faking it on the phone: How to tell if a voice call is AI or not:
    https://www.welivesecurity.com/en/business-security/faking-it-phone-how-tell-voice-call-ai/

  9. Google Disrupts Chinese Hackers Targeting Telecoms, Governments:
    https://www.securityweek.com/google-disrupts-chinese-cyberespionage-campaign-targeting-telecoms-governments/

  10. Scattered Lapsus$ Hunters gang is recruiting women to launch vishing attacks:
    https://www.helpnetsecurity.com/2026/02/26/slh-seeks-women-for-vishing-attacks/

Cyberheist 'Fave' Links
This Week's Links We Like, Tips, Hints and Fun Stuff
