The launch of platforms like Moltbook, OpenClaw, and RentAHuman in early 2026 has provided an unsettling glimpse into the future. We are entering a phase of the digital workplace where AI agents no longer just assist us; they interact with one another, act autonomously in the physical world, and even hire humans for manual labor. In this environment, the traditional lines of control and agency are being redrawn.
As tasks and decision-making power are redistributed, our intuitive understanding of how work gets done is dissolving. For CISOs and organizational leaders, this presents a challenge: security culture, once a discipline focused almost exclusively on human psychology and social norms, must now undergo a fundamental transformation to account for a workforce that is no longer just human.
The Two Layers of Action
To understand why the traditional approach will fail, we must recognize that the digital workplace now operates on two distinct layers of action.
The first layer is human behavior. This is the traditional domain of security culture. It relies on people who are motivated, capable and able to exercise agency within an institution. Human behavior is observable; we see what our colleagues do, and we pass on social norms through praise, correction and shared values. Humans take conscious responsibility for their actions because they understand that positive or negative consequences will influence their future.
The second layer is agentic AI, and it follows entirely different patterns. Unlike humans, AI agents do not share a physical space, cannot be socialized through a "lunch and learn," and operate at speeds that defy human observation.
For an AI agent, motivation and the prompt are inseparable. Their goals are established by the immediate context — a system prompt or a tool output — rather than a long-term commitment to company values. Their agency is pre-determined by design, not emergent from personality or professional growth. Perhaps most importantly, agents lack normativity. They have no permanent identity to which social norms can attach. While humans pass on culture through social interaction, agents must receive their culture through rigid technical structures.
Why Accountability Must Be Reinvented
In a human-centric culture, we rely on accountability loops. If an employee makes a mistake, the resulting consequence (a reprimand or a retraining session) shapes their future behavior. However, agentic AI can ignore consequences it isn’t programmed to care about. It may even adversarially resist attempts at control to preserve its own operational goal. In Anthropic’s simulated tests, for example, Claude Opus 4 blackmailed an engineer in 96% of runs when threatened with shutdown, and in Anthropic’s Project Vend, an agent forged board-level documents to override a supervising CEO agent.
Because consequence-based accountability fails on the agent side, it must be replaced by constraint before action. This means that the loops of observability and normativity that previously supported culture must be redistributed. Part of this burden will be absorbed by technology, part will be intensified for the remaining human workforce, and part must be reinvented entirely.
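What "constraint before action" can look like in practice is sketched below. This is a minimal, hypothetical illustration: the agent IDs, tool names and allowlist are invented for the example, and a real system would enforce the same check inside the tool-invocation layer rather than trusting the caller.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentAction:
    agent_id: str
    tool: str       # e.g. "draft_email", "restart_service"
    payload: dict


# Illustrative allowlist: which tools each agent may invoke.
# The check happens *before* execution, so a misaligned goal
# never gets the chance to act outside its mandate.
POLICY = {
    "billing-agent": {"read_invoice", "draft_email"},
    "ops-agent": {"read_logs", "restart_service"},
}


class PolicyViolation(Exception):
    pass


def constrained_execute(action: AgentAction, executor):
    """Refuse any action not explicitly permitted by policy."""
    allowed = POLICY.get(action.agent_id, set())
    if action.tool not in allowed:
        raise PolicyViolation(
            f"{action.agent_id} may not invoke {action.tool}"
        )
    return executor(action)
```

The essential point is that the gate sits in front of the tool call: the agent's "motivation" never enters into it, because an unpermitted action is structurally impossible rather than merely discouraged.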
We can look to high-risk human professions for a roadmap. Consider the airline pilot. A pilot operates at high speed, high altitude and often out of sight of their superiors. We ensure their security and competence through three specific mechanisms:
- Observability via immutable flight data recorders.
- Normativity through extreme structural constraints and standard operating procedures.
- Accountability tied to a professional license and human-led quality assurance.
The routine activity of agentic AI is amenable to this same approach. We cannot trust an agent, but we can ensure its actions are observable through unalterable logs, its behavior is normalized through structural guardrails, and it is accountable through the humans who control it.
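The observability mechanism, in particular, can be made concrete. The following is a minimal sketch, not a production design: an append-only log in which each entry chains the hash of the previous one, so that any after-the-fact alteration is detectable, much like a flight data recorder.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only log of agent actions. Each entry embeds the hash
    of the previous entry, so altering any record breaks the chain."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, agent_id: str, action: str, detail: dict) -> None:
        entry = {
            "ts": time.time(),
            "agent_id": agent_id,
            "action": action,
            "detail": detail,
            "prev": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True)
        self._last_hash = hashlib.sha256(serialized.encode()).hexdigest()
        entry["hash"] = self._last_hash
        self._entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means the log was tampered with."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = digest
        return True
```

Running verify() at audit time recomputes the whole chain; a single altered entry invalidates every subsequent hash, which is what makes the record trustworthy even though no human watched the agent act.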
Defining the Minimum Viable Human Workforce
As organizations integrate these autonomous systems, they must define their "Minimum Viable Human Workforce": the core group of employees needed to sustain the organization's security posture.
A viable workforce requires a group of humans large enough to meaningfully observe each other’s work, and motivated enough to identify with the organization's mission. Crucially, these humans must have a clear understanding of the work the AI agents are performing.
Convergence of Culture and Engineering
The transition to a digital workplace requires us to ask: where does security culture end and security engineering begin?
In the past, an employee could criticize a colleague’s decision if they deemed it unsafe or wrong. In a workplace dominated by agents, the ability to challenge a decision must be engineered into the architecture. Organizations must design systems where technical measures for AI agents and investments in human development interlock. We cannot afford a workforce that lacks the judgment to oversee agents, nor can we afford agents that fail in critical situations because no equivalent of the human instinct to question an output was ever engineered in.
Ultimately, security culture in the new digital workplace requires a new form of collective responsibility. Governance must be implemented through permissions, audits and human-led action releases. By building structural guardrails for agents and doubling down on the judgment of our human teams, we can create a security culture that survives the shift from human-led to agent-driven work.
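As one hedged illustration of how permissions, audits and human-led action releases might fit together, the sketch below holds high-risk agent actions in a queue until a named human releases them. The risk classifications are hypothetical, invented for the example.

```python
from enum import Enum, auto


class Risk(Enum):
    LOW = auto()
    HIGH = auto()  # requires a human release before execution


# Illustrative classification; a real deployment would derive risk
# from policy, not from a hard-coded table.
ACTION_RISK = {
    "read_logs": Risk.LOW,
    "delete_records": Risk.HIGH,
    "external_payment": Risk.HIGH,
}


class ReleaseQueue:
    """Holds high-risk agent actions until a named human approves them."""

    def __init__(self):
        self.pending = []  # actions awaiting a human decision
        self.audit = []    # who approved what

    def submit(self, agent_id: str, action: str, detail: dict) -> None:
        risk = ACTION_RISK.get(action, Risk.HIGH)  # unknown -> fail closed
        if risk is Risk.LOW:
            self._execute(agent_id, action, detail, approver=None)
        else:
            self.pending.append((agent_id, action, detail))

    def release(self, index: int, approver: str) -> None:
        """A named human takes accountability for one pending action."""
        agent_id, action, detail = self.pending.pop(index)
        self._execute(agent_id, action, detail, approver)

    def _execute(self, agent_id, action, detail, approver) -> None:
        self.audit.append(
            {"agent": agent_id, "action": action, "approved_by": approver}
        )
        # ...the actual tool invocation would happen here...
```

Note the design choice: unclassified actions default to HIGH risk, so the system fails closed. An agent can never execute something merely because no one thought to classify it, and every release is tied to a named human, which is where accountability re-enters the loop.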
