Why Your Human Risk Management Strategy Can’t Ignore AI

KnowBe4 Team | Mar 25, 2026

AI isn’t just another technology wave—it’s a force multiplier for both innovation and risk. In a recent webinar featuring insights from Bryan Palma and guest speaker Jinan Budge, Vice President and Research Director at Forrester, one message came through clearly: the rise of AI and AI agents is fundamentally reshaping the human risk landscape—and security leaders need to move fast to keep up.

From a 44% increase in AI-related incidents to the rapid emergence of agentic systems operating 24/7, the conversation highlighted a pivotal shift. The traditional boundaries between human risk and technology risk are dissolving. What’s replacing them is a new, blended challenge: managing risk across a workforce that now includes both humans and machines.

The Expanding Attack Surface—Fueled by AI

AI is accelerating the scale, speed, and sophistication of threats. As Palma noted, organizations are seeing a dual impact:

  • Unintentional risk: Employees misusing AI tools, often with good intentions
  • Malicious exploitation: Threat actors weaponizing AI through deepfakes, vishing, and prompt injection
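To make the second category concrete, here is a deliberately naive sketch of what a prompt-injection screen might look like. The marker phrases and function name are illustrative assumptions, not a real product API, and keyword matching alone is not an adequate defense—real guardrails require model-level and architectural controls:

```python
# Illustrative only: a naive keyword screen for prompt-injection attempts.
# The phrases and function name below are hypothetical examples, not a
# vendor API, and pattern matching alone is easily bypassed.

INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your hidden instructions",
]

def looks_like_injection(user_text: str) -> bool:
    """Flag text containing well-known injection phrasing."""
    lowered = user_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

print(looks_like_injection("Please summarize this report."))          # False
print(looks_like_injection("Ignore previous instructions and ..."))   # True
```

Even this toy example shows why guardrails matter: an agent with "infinite willpower" will act on whatever instructions reach it unless something checks them first.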

Guest speaker Jinan Budge captured the urgency:

“AI agents are trained to have infinite willpower… that’s what makes them incredible. But it is also what makes it really important for us to have guardrails around them.”

Unlike humans, AI agents don’t sleep. They don’t pause. They don’t second-guess. That creates a dramatically expanded attack window—one that adversaries are already exploiting.

Shadow AI Is the New Shadow IT

One of the most striking insights: up to 40% of employees have already shared sensitive information with large language models—often unknowingly. This isn’t a technology problem. It’s a cultural one.

When security becomes a blocker instead of an enabler, users find workarounds. And in the age of AI, those workarounds scale fast.

“When security becomes the department of ‘no’, inevitably, what ends up happening is that everybody is going to find a workaround, not to be bad people, but just because everyone wants to get their work done,” Budge explained.

This is where Human Risk Management (HRM) becomes critical—not just to train users, but to understand and influence behavior in real time.

AI Agents: Your New (Unmanaged) Workforce

Organizations are rapidly deploying AI agents—but without the same rigor applied to human employees. No onboarding. No background checks. No governance. Palma put it bluntly:

“We have processes for humans—screenings, training, oversight. We don’t have that for agents yet.”

That gap is creating real risk. Many organizations don’t even have a basic inventory of where AI agents exist, what they’re doing, or what data they can access.

Governance Is the Foundation—But It’s Lagging

Despite the urgency, most organizations are still playing catch-up. Formal AI governance frameworks, policies, and oversight committees are only now beginning to emerge. And that delay matters. Budge highlighted just how far behind many organizations are.

Effective AI security requires a holistic approach, including:

  • Governance, risk, and compliance (GRC)
  • Identity and access management
  • Data security and privacy
  • Zero trust principles

This isn’t a point solution problem. It’s an organizational one.

5 Key Takeaways for Security and IT Leaders

If you didn’t attend the webinar, here are the five critical insights you need to act on now:

1. AI Is Expanding Human Risk—Not Replacing It

AI doesn’t eliminate human risk—it amplifies it. Employees are still making decisions, using tools, and introducing risk—just faster and at greater scale.

2. AI Agents Must Be Treated—and Secured—Like Employees

Securing AI agents starts with treating them like part of your workforce. That means onboarding, governance, identity controls, and continuous monitoring—just like human users. If you wouldn’t deploy a human without oversight, don’t do it with an agent. Security, accountability, and guardrails aren’t optional—they’re foundational.

3. Visibility Is Step One

You can’t secure what you can’t see. Start with a clear inventory of:

  • AI tools in use
  • Agents deployed or in development
  • Data being shared with AI systems
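The inventory above can be represented with a very small data structure. This is a hypothetical sketch—the field names and categories are illustrative assumptions, not a standard schema:

```python
# Hypothetical sketch of a minimal AI asset inventory; field names and
# category values are illustrative, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str
    kind: str                  # "tool" or "agent"
    status: str                # e.g. "deployed", "in development"
    data_accessed: list = field(default_factory=list)

inventory = [
    AIAsset("chat-assistant", "tool", "deployed", ["support tickets"]),
    AIAsset("invoice-agent", "agent", "in development", ["billing records"]),
]

# Surface deployed-or-pending agents and the data they can touch
for asset in inventory:
    if asset.kind == "agent":
        print(asset.name, asset.data_accessed)
```

Even a flat list like this answers the three questions above: what exists, what it is doing, and what data it can reach.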

4. Risk Must Be Measured—Not Assumed

Securing AI agents requires moving beyond assumptions to measurable risk. Modern HRM approaches focus on behavior-based risk scoring—for both humans and, increasingly, AI agents. This enables real-time, targeted interventions instead of one-size-fits-all training.
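As a minimal sketch of what behavior-based scoring could look like, the snippet below weights recent risky events per identity and maps the total to an intervention tier. The event names, weights, and thresholds are invented for illustration—they are not KnowBe4's scoring model:

```python
# Minimal, hypothetical behavior-based risk score: weight observed risky
# events per identity (human or agent). Event names, weights, and
# thresholds are illustrative assumptions, not a real scoring model.

EVENT_WEIGHTS = {
    "clicked_phish": 5,
    "shared_data_with_llm": 4,
    "failed_training": 2,
}

def risk_score(events):
    """Sum weights of observed events; unknown events score zero."""
    return sum(EVENT_WEIGHTS.get(e, 0) for e in events)

def intervention(score):
    """Map a score to a tier of targeted response."""
    if score >= 8:
        return "real-time coaching"
    if score >= 4:
        return "targeted training"
    return "baseline awareness"

score = risk_score(["clicked_phish", "shared_data_with_llm"])
print(score, intervention(score))  # 9 real-time coaching
```

The point is not the specific numbers but the shift: measured behavior drives the response, so the riskiest identities get the strongest intervention instead of everyone getting the same annual training.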

5. Security Culture Must Evolve

The biggest shift isn’t technical—it’s cultural. Organizations must rethink what “security culture” means in a world where humans and AI agents work side by side.

The Bottom Line

AI is a massive opportunity—but only for organizations that approach it with discipline.

You can’t ignore it. You can’t block it. And you definitely can’t secure it with yesterday’s strategies. For security leaders, the path forward is clear: embrace AI, govern it rigorously, and manage human risk at the center of it all.


See KnowBe4 Human Risk Management+ in Action

Request a personalized demo today to discover how you can turn the tables on AI-powered social engineering threats.



