I Didn’t Revoke my API Keys Because Claude Called Me An Idiot

Javvad Malik | Mar 24, 2026

I need to confess something. A few days ago, whilst vibe coding at 2am (which can burn through tokens like they are going out of fashion), I accidentally pasted my API key directly into a Claude chat instead of the terminal window I had open.

Claude told me off.

It had a full, proper, disappointed-parent tone; the AI equivalent of 'I'm not angry, just disappointed', except it absolutely was angry. There may have been paragraphs. Multiple paragraphs (at least it felt that way, and that is how I'm choosing to recall the episode) about credential hygiene and security best practices and the importance of immediately revoking compromised keys.

I felt terrible. Genuinely terrible. That special kind of shame that comes from doing exactly the thing you have spent years sneering at other people for doing. Every single time I have heard about another developer leaving AWS credentials in a public GitHub repo, I have done that superior little headshake. That 'how could anyone be so careless' eye roll. That smugness that comes from believing you are somehow above such basic errors.

Turns out I am not above anything. I am just like everyone else. Mostly just fumbling through the darkness, hoping nobody notices.

After the shame came something else entirely. Anger. Proper, irrational, ego-driven anger. At a chatbot.

Because who does this son of Clippy think it is, lecturing me about security? It is a language model. A very sophisticated one, admittedly, but at its core it is just predicting the next most likely word. It has no concept of what security actually means. It has never sat through a three-hour incident response call at 4am. It has never had to explain to a board why the customer database is currently being sold on a forum with a skull logo.

It has never felt anything, because it cannot feel anything.

And yet there I was, being told off. So my immediate response was not to fix the issue; it was to yell, "How dare you?"

I think this is how most of these things go down. Someone makes a mistake, and when it gets pointed out, instead of fixing it they get defensive and angry and rush to protect their ego. Leaving a vulnerability in place can feel better than admitting you were wrong.

I still have not revoked that API key.

Because why give Claude the satisfaction of being right and me being wrong?

This is insane. I know it is insane. I write about this exact behaviour for a living. I have spent years documenting the human factors that lead to security failures. I know that ego is an exploit. That pride comes before the breach. That the greatest vulnerability in any system is the bit that gets offended.

And I am still sitting here, knowingly leaving an API key exposed, because I got into an argument with a chatbot and lost.

The security industry loves to talk about technical controls. Multi-factor authentication. Zero trust architecture. Privileged access management. We have acronyms for everything. We have frameworks and standards and compliance requirements. We can scan for secrets in code. We can rotate credentials automatically. We can build systems that are theoretically impenetrable.
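Secret scanning, one of the controls mentioned above, really is mundane to sketch. Here is a minimal, illustrative Python example (the patterns and names are my own assumptions, not any particular tool's rules; real scanners like gitleaks or truffleHog use far larger rule sets plus entropy checks):

```python
import re

# Illustrative patterns only -- a real scanner ships hundreds of rules.
PATTERNS = {
    # AWS access key IDs follow a well-known "AKIA" + 16 char format.
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic "api_key = '...'" style assignments with a long token value.
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"]?([A-Za-z0-9_\-]{20,})"
    ),
}


def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for likely secrets in text."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Wiring something like this into a pre-commit hook is cheap; the expensive part, as the rest of this post argues, is getting the human to act on the finding instead of arguing with it.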

But none of that matters if the human running the system would rather risk a breach than admit they made a mistake.

Looking forward, this kind of human-AI interaction will only lead to more bizarre outcomes. We are already seeing people use their chatbots as personal trainers, therapists, spiritual guides, lovers, co-workers… and with each relationship that gets formed, we, as humans, are going to bring emotional baggage into these interactions: barriers, anger, resentment, and happiness too.

I am going to revoke that API key eventually. Probably. Maybe after I finish writing this. Or maybe tomorrow. Or maybe next week, after everyone has forgotten about it and I can pretend I did it immediately and just forgot to mention it.

See? Even now, I am more concerned with how admitting the delay makes me look than with the actual security implications.

The enemy is not me (OK, maybe it is my ego) and it is definitely not the AI. But there is this weird bit of friction in the middle of the human-AI interaction where, as humans, we would rather be right than secure. Changing that culture and mindset might be one of the first places we need to focus.

