The human layer is not diminished by Anthropic's Mythos Preview announcement. If anything, it is reinforced, and for reasons that deserve to be spelled out clearly.
What Anthropic Announced
The genuinely new development is not vulnerability discovery. Machine-assisted bug hunting has existed for years, and Google's Big Sleep already surfaced a real-world SQLite vulnerability in 2024. What is new is autonomous exploit chaining at scale. Where the previous model (Opus 4.6) had a near-zero autonomous exploit success rate, Mythos Preview reaches 72.4%. In Anthropic's own framing, the model surpasses all but the most skilled human security researchers. In other words, the model does not just "find bugs"; it "writes working exploits without human intervention". That is the real news.
Anthropic is not releasing Mythos Preview to the general public. Instead, the company has launched Project Glasswing, a coalition of more than 40 organisations, including Amazon Web Services (AWS), Apple, Google, Microsoft, CrowdStrike, Cisco, JPMorgan Chase, the Linux Foundation, Nvidia, Broadcom, and Palo Alto Networks, that will use the model defensively to find and patch vulnerabilities in critical infrastructure. Anthropic is committing up to $100M in usage credits and $4M in direct donations to open-source security organisations.
Not everyone is convinced. Heidy Khlaaf of the AI Now Institute, experts like Marcus Hutchins, and others have cautioned against taking the claims at face value without disclosure of false-positive rates and human-review methodology.
Why This Matters: Sophistication, Speed, and Scale
- Sophistication. Mythos Preview surfaces zero-days that fuzzers and human reviewers have missed for decades, and chains them into working exploits autonomously.
- Speed. Industry estimates suggest zero-days can live for years before detection, while organisations take weeks to patch them once disclosed. The first compromises typically occur within minutes to 24 hours after release. Artificial Intelligence (AI) models like Mythos compress this window dramatically.
- Scale. Mythos Preview discovered thousands of zero-days in weeks, with the ability to weaponise and deploy at scale within minutes.
The Human as the Most Important Layer
First, initial access. Phishing, business email compromise, and social engineering remain the dominant initial access vectors, regardless of how good autonomous vulnerability discovery becomes. Mythos does not change that. A zero-day exploit chain still needs initial access, and that is usually achieved by a person clicking, approving, or trusting something they should not. I expect attackers to double down on the human as an initial access vector as technical defenses improve, not retreat from it.
Second, human judgment. AI agents and autonomous defensive tooling generate findings at machine speed, and most of those findings still require human contextual judgment to act on. Triage, prioritisation, and the decision of when to take a system offline are not problems that resolve at the model layer.
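To make this concrete, here is a minimal, hypothetical sketch of why machine-speed findings still funnel into human review: only low-severity findings with a known-safe fix are auto-remediated, everything else lands in a human queue. The severity scores, threshold, and `Finding` shape are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: float        # 0.0 - 10.0, a CVSS-like score (assumed scale)
    auto_patchable: bool   # tooling believes a safe automated fix exists

def triage(findings, auto_threshold=4.0):
    """Auto-remediate low-risk findings; escalate the rest for human judgment."""
    auto, human = [], []
    for f in sorted(findings, key=lambda f: f.severity, reverse=True):
        if f.auto_patchable and f.severity < auto_threshold:
            auto.append(f)          # safe to fix at machine speed
        else:
            human.append(f)         # contextual judgment required
    return auto, human

findings = [
    Finding("Outdated TLS config", 3.1, True),
    Finding("Possible RCE in auth service", 9.8, False),
    Finding("Verbose error pages", 2.0, True),
]
auto, human = triage(findings)
print([f.title for f in human])  # ['Possible RCE in auth service']
```

The point of the design is the asymmetry: the machine can safely act on the long tail, but the decision to take a production system offline stays with a person.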
Third, accountability and oversight. As organisations deploy AI agents inside their own environments, someone has to own the outcomes those agents produce. In any corporation, that accountability ultimately rests with a human.
Humans Must Safely Interact with AI Agents
- Human intuition and machine intelligence must collaborate to detect the most sophisticated attacks
- Human oversight and accountability for business processes become key requirements
- Human AI and cybersecurity literacy becomes even more important as human actions become part of human-AI value-creation processes
Security awareness and human risk management are not legacy controls. They are the layer that holds when the technical layers are outpaced.
Essential Capabilities and Behavioural Analytics
Advances like Mythos Preview's ability to discover vulnerabilities and develop exploit chains repeatedly underline the need to pivot from a gatekeeping perspective to a behaviour-profiling one. As attacks get more sophisticated, faster, and more frequent, organisations must establish a comprehensive AI and cybersecurity governance programme based on transparency and oversight:
- Establish strong observability and least-agency principles for AI agent development
- Reduce shadow Information Technology (IT) and shadow AI
- Understand "normal" behaviour and communication patterns inside your network and software stack, crucially also for AI agents
- Prepare for intervention and contingency to reduce initial impact and blast radius when something goes wrong
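As a toy illustration of baselining "normal" behaviour, the sketch below flags an AI agent whose daily event count deviates sharply from its historical baseline. The counts, the z-score threshold, and the idea of counting outbound API calls per agent are invented for the example; real behavioural analytics would profile far richer signals.

```python
from statistics import mean, stdev

def baseline(counts):
    """Derive a simple mean/std-dev baseline from historical event counts."""
    return mean(counts), stdev(counts)

def is_anomalous(count, mu, sigma, threshold=3.0):
    """Flag a count deviating more than `threshold` std devs from baseline."""
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Hypothetical history: daily outbound API calls made by one AI agent
history = [102, 98, 110, 95, 105, 99, 101]
mu, sigma = baseline(history)

print(is_anomalous(100, mu, sigma))   # within the normal band
print(is_anomalous(5000, mu, sigma))  # sudden spike -> flagged for review
```

The value is not the statistics but the posture: once you know what "normal" looks like for an agent, a compromised or misbehaving one becomes visible even when its individual actions are authorised.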
What Organisations Should Do
Technical hygiene remains essential, and Anthropic's press release means it has to be tightened now, not relaxed:
- Ensure patch management capabilities at the highest level with the quickest turnaround times possible
- Invest in cybersecurity oversight and monitoring to reduce Mean-Time-To-Detect (MTTD) and Mean-Time-To-Contain (MTTC)
- Develop solid backup and recovery strategies to reduce Mean-Time-To-Recover (MTTR)
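These time-based metrics are straightforward to derive once incident lifecycle timestamps are recorded. The sketch below, with invented timestamps and field names, shows one way to compute MTTD, MTTC, and MTTR in hours from an incident log.

```python
from datetime import datetime
from statistics import mean

def mean_delta_hours(incidents, start_key, end_key):
    """Average elapsed hours between two lifecycle timestamps across incidents."""
    return mean(
        (i[end_key] - i[start_key]).total_seconds() / 3600
        for i in incidents
    )

# Hypothetical incident log: occurred -> detected -> contained -> recovered
incidents = [
    {
        "occurred": datetime(2025, 3, 1, 8, 0),
        "detected": datetime(2025, 3, 1, 14, 0),
        "contained": datetime(2025, 3, 1, 18, 0),
        "recovered": datetime(2025, 3, 2, 8, 0),
    },
    {
        "occurred": datetime(2025, 3, 5, 9, 0),
        "detected": datetime(2025, 3, 5, 11, 0),
        "contained": datetime(2025, 3, 5, 12, 0),
        "recovered": datetime(2025, 3, 5, 20, 0),
    },
]

mttd = mean_delta_hours(incidents, "occurred", "detected")   # 4.0 hours
mttc = mean_delta_hours(incidents, "detected", "contained")  # 2.5 hours
mttr = mean_delta_hours(incidents, "contained", "recovered") # 11.0 hours
```

Tracking these three numbers over time is what turns "invest in monitoring" from a slogan into a measurable target.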
