AI-to-AI Communication and Secret AI Code Must Be Stopped At All Costs

Roger Grimes | Mar 9, 2026

As I wrote in my recent book, How AI and Quantum Impacts Cyber Threats and Defenses, as we humans use AI more and more, AI will begin to communicate with itself using new AI-only communication methods that humans cannot easily see or read.

If there is no human-readable audit trail or code, this is a very, very bad thing. It must be stopped at all costs.

Humans are beginning to use AI more and more to do things they used to do manually. Soon, we will all be using multiple AI agents. Our desktops and entire digital lives will involve interacting with AI agents. AI agents will read our email, manage our calendars, do tasks for us, and evolve to do more and more for us.

For example, if we want to take an extended vacation today, most of us get on the web and go to flight, hotel, and car rental websites (or their equivalents). In the near future, most of us will just ask our desktop AI to make the trip plans for us. It will know what airline and airport you prefer, what type of seat you like (aisle, window, premium), what type of hotels you like or whether you would rather use Airbnbs, and whether you prefer car rentals (and if so, what type of vehicle) or Uber. Let us be honest: it is a pain to make your own travel plans over and over, choosing the same options most of the time. Pretty soon, your AI agent will just do it for you, and most of the time, do a pretty good job.

Browsing by humans will significantly decrease over time. Most websites will be re-coded and optimized for AI-to-AI communication, often using AI-specific protocols like MCP (Model Context Protocol) and A2A (Agent2Agent). Our kids and grandkids will likely not be the big Internet browsers that we are today.

This “AI-first” approach will be pushed relentlessly toward ever more efficient means for AI to communicate with other AI – humans be damned.

From an efficiency-only perspective, it just makes sense. Why communicate and make things that humans can more easily understand when the bulk of communications and coding will be AI-facing, not human-facing? I get it. 

Elon Musk (and others) think that AI, which will soon be doing the majority of the coding (if not already), will morph to coding using a method that is not easily human-understandable. Maybe it just codes in assembler or binary. Maybe it creates its own AI-understandable coding language. Who knows? But it will not be coding in any programming language we use today. AI will just get its assigned task and implement the needed binaries and/or site coding. 

The problem is that, without doubt, this will lead to very bad things. And I do not mean Skynet becoming self-aware and killing us all (although it does not exactly prevent that scenario either). If AI is doing all the coding and that coding is done in secret, in a language we humans cannot understand, it will lead to globally catastrophic vulnerabilities and breakdowns that the AI cannot repair on its own (at least not anytime soon).

There is not an AI “alive” that does not hallucinate or could not be maliciously “poisoned” in many different ways. If AIs start coding only with other AIs, the code they produce will be the ultimate hive mind, reusing each other's code in a way that would look all too familiar to GitHub users…except on steroids.

The global collective AI programming hive will eventually lead to big vulnerabilities and operational interruptions that we have so far not even come close to experiencing. If you think what happened during CrowdStrike’s debacle was bad, you have not seen anything yet.

And when that global operational interruption happens…and it will happen…the AI will not be able to get out of its own way and repair the cause. It will be up to humans to repair it. That is, if we still have any human programmers left to help out.

For humans to help out and save us, they have to be able to figure out what the AIs did wrong. That means giving humans code they can read and understand. Sure, we can hope the AI can convert its machine-language code to something human-readable in an emergency, assuming it is even still operational and can respond to that task. But it would be so much more helpful if the AIs either always wrote in human-understandable code, documented and commented it, or at least left an audit trail that allows humans to fix what the AI messed up.

We should never allow AIs to communicate in AI-only languages and coding where there is not also very good human-readable code, documentation, and/or a great audit trail.

Personally, I am in favor of forcing the AIs to always communicate and write in human-understandable code and bits. I will live with the efficiency hit. Some things are more important than efficiency. 

We need a human in the loop for all critical tasks and decisions. That includes the ability to review AI communications, code, and actions. Allowing AI to have secret communications will result in some terrible future outcomes, for sure.

You may not have the ability to influence your organization’s use of AI coding tools that rely on AI-only communication and coding. That is a very likely future scenario. In that case, push for whatever audit trails and accountability you can get.

If you use AI agents in any capacity, at the very least, you want an inventory of those AIs (no shadow AI in your environment), you want an audit trail of what those AIs do, and you want to know what critical tasks they do with privileged credentials. You should have these things regardless of whether AI is doing anything in secret. It should be standard. 
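The three recommendations above (an inventory of your agents, an audit trail of what they do, and visibility into which actions used privileged credentials) can be sketched as a toy, human-readable audit log. This is a minimal illustration only; the class and method names (`AgentAuditLog`, `record`, `privileged_actions`) are my own assumptions, not any real product's API:

```python
import json
import time
import uuid

class AgentAuditLog:
    """Hypothetical append-only, human-readable log of AI agent actions."""

    def __init__(self):
        self.entries = []  # doubles as the inventory of agents seen

    def record(self, agent, action, privileged=False, detail=""):
        """Record one agent action in plain language a human can review."""
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "agent": agent,            # which AI acted (no shadow AI)
            "action": action,          # what it did, stated readably
            "privileged": privileged,  # did it use privileged credentials?
            "detail": detail,
        }
        self.entries.append(entry)
        return entry

    def agents(self):
        """The inventory: every agent that has ever acted."""
        return sorted({e["agent"] for e in self.entries})

    def privileged_actions(self):
        """The critical subset a human reviewer should inspect first."""
        return [e for e in self.entries if e["privileged"]]

    def dump(self):
        """One JSON object per line, so humans and tools can both read it."""
        return "\n".join(json.dumps(e) for e in self.entries)

# Illustrative usage with made-up agent names
log = AgentAuditLog()
log.record("travel-agent", "booked flight", detail="aisle seat, preferred airline")
log.record("mail-agent", "deleted 40 messages", privileged=True)
```

The point of the sketch is the shape, not the code: every agent action lands in one reviewable place, and the privileged subset is trivially filterable, which is exactly what you lose if agents communicate only in AI-readable form.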

When you see people talking about and promoting AI-only understood communication and coding because of the great efficiencies it will create...push back! Make sure your environment has documented code, accountability, and audit trails.

