Ever since OpenAI publicly released ChatGPT in late 2022, people have been predicting the end of programmers.
Supposedly, AI can do anything programmers can do. While I’m not convinced all programmers are going away, I wouldn’t want to be a brand-new programmer, and I do think the field is definitely going to change, if not significantly shrink, over time.
I’m not going out on much of a limb in saying this; almost everyone thinks it.
Microsoft CEO Satya Nadella thinks this.
Meta CEO Mark Zuckerberg thinks this.
In the early stages of AI, traditional programmers used AI to look things up and get help with coding. That quickly turned into AI writing code snippets. Those “snippets” are morphing into longer and longer sections of code. Today, you’re going to have a hard time getting a job as a programmer without constantly using AI as a tool to help you write code faster and/or better.
Almost all software is some sort of front end that we interact with, which talks to one or more databases (directly or through other services) and brings back some promised result. My son, who works as a programmer/programming manager for one of the most popular sites/services in the world, has long said this to me. He has felt that nearly every project he has coded in his life could be recreated from about 15 different replaceable components. He absolutely thinks AI will take over coding in the near future.
One day, anyone will be able to walk up to an AI interface, describe in their regular speaking voice what they want, and the AI will code it.
Now, here’s where I see the rub: “Who’s going to program the AI?”
I think we will still need programmers, but instead of writing raw code that becomes programs and services, they will write and update the AI agents that do the actual developing. Perhaps one day we will get “artificial general intelligence”, where AI agents are as smart as humans and can write and update themselves, but that seems further off.
What Does Agentic AI Mean?
Systems of cooperating AI agents are known as agentic AI. You’ve probably heard the term agentic AI everywhere lately. It’s the new buzzword, like AI, quantum, blockchain, metaverse, and cloud before it.
Agentic AI is the idea that you’ll have separate, autonomous AI agents all cooperating to produce a common outcome. Think of an assembly line, but with AI software programs. Each AI agent has its particular role, making decisions independently but serving the larger goal of the ecosystem it belongs to.
Some of the agentic AI models have already started giving names to common roles, such as the ones below (see the sketch after this list for how they might fit together):
- Director Agent (the master agent that coordinates all the other agents)
- Input Agent (takes information from a human, cleans it up for agentic use, and passes it to the Director Agent)
- Research Agent (looks for the data needed to return an answer or support a decision)
- Worker Agents (come up with the solution)
- Publisher Agent (creates and publishes the output)
- Creator Agent (makes physical things, if the solution calls for them)
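To make the assembly-line idea concrete, here is a minimal sketch of how those roles might fit together. The class and method names are my own illustration, not any particular framework’s API, and each agent is stubbed out where a real system would call an LLM or external service:

```python
# Hypothetical sketch of the agent roles listed above; every agent here is a
# stub that returns canned strings instead of calling a real model or service.

class InputAgent:
    def clean(self, raw_request: str) -> str:
        # Normalize the human's request before handing it to the Director.
        return raw_request.strip().lower()

class ResearchAgent:
    def gather(self, task: str) -> list[str]:
        # Look up whatever data the task needs (stubbed here).
        return [f"background fact about '{task}'"]

class WorkerAgent:
    def solve(self, task: str, facts: list[str]) -> str:
        # Produce a candidate solution from the task plus the research.
        return f"solution for '{task}' using {len(facts)} fact(s)"

class PublisherAgent:
    def publish(self, solution: str) -> str:
        # Format and release the final output.
        return f"PUBLISHED: {solution}"

class DirectorAgent:
    """The master agent: routes work between the specialized agents."""

    def __init__(self):
        self.input_agent = InputAgent()
        self.research_agent = ResearchAgent()
        self.worker_agent = WorkerAgent()
        self.publisher_agent = PublisherAgent()

    def handle(self, raw_request: str) -> str:
        task = self.input_agent.clean(raw_request)
        facts = self.research_agent.gather(task)
        solution = self.worker_agent.solve(task, facts)
        return self.publisher_agent.publish(solution)

if __name__ == "__main__":
    director = DirectorAgent()
    print(director.handle("  Build me a small online storefront  "))
```

In a real agentic system, each of these would be its own model or service making judgment calls rather than a deterministic function; the point here is only the division of labor.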
The first generation of the Internet was mostly static web pages displaying content. We all loved that Internet because it put the world’s knowledge within reach of a query and gave us answers. The second generation was websites with dynamic content, often powered by JavaScript. Services started to show up.
The third generation was sophisticated services, mobile phones, translation capabilities, and the moving of our offline lives online. Still, the third generation simply gives us information one way or another. The next generation of the Internet and AI will be the creator version.
We won’t be asking AI to write us code that we then deploy; AI will write and deploy the site or service itself. With the previous generations, we would ask the Internet and AI how to do something. For example, how can I start a profitable online business with Amazon and make $100K a year? [I’m stealing this example from someone I heard on a podcast…I can’t remember who]. Today’s AI-internet can tell you how to do that, but it’s up to you to make it happen.
The next generation of the AI-enabled Internet will simply make the business. It will fill out all the necessary business and tax paperwork, interface with Amazon, buy and sell what it needs, and send the profit to us. The future agentic Internet will interface with sophisticated 3D printers and make what we want: clothing, goods, and food. Agentic AI will be a creator.
But we will still need programmers to write the agentic AI that makes the things.
We will also need people to write AI agents that troubleshoot, secure, and protect agentic AI. Take everything we do today and imagine creating an AI agent for it. We will need agents to ensure strong and secure authentication. We will need agents to write secure code, free of mistakes like memory-type mismatches and hard-coded credentials.
We will need agents to test our agents. We will need agents that write secure APIs and create more accurate biometrics. If there is a security feature today, we’ll need to create an AI agent for it. And we will need programmers to write those agents.
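As a toy example of what one of those security agents might check for, here is a crude scan for hard-coded credentials. The pattern and function names are my own illustration, not a real product’s API; an actual agent would pair an LLM with proper static-analysis tooling rather than a single regex:

```python
import re

# Toy stand-in for a "secure code review" agent: flags lines that look like
# hard-coded credentials. Real agents would go far beyond a single regex.
CREDENTIAL_PATTERN = re.compile(
    r"""(password|passwd|secret|api[_-]?key|token)\s*[:=]\s*['"][^'"]+['"]""",
    re.IGNORECASE,
)

def find_hardcoded_credentials(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line_text) for lines that look like embedded secrets."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if CREDENTIAL_PATTERN.search(line):
            findings.append((lineno, line.strip()))
    return findings

if __name__ == "__main__":
    sample = 'db_password = "hunter2"\nuser_name = get_user_name()\n'
    for lineno, line in find_hardcoded_credentials(sample):
        print(f"line {lineno}: possible hard-coded credential -> {line}")
```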
We will likely discover entirely new classes of security vulnerabilities that are specific to agentic AI and did not exist in previous types of ecosystems. This happened with the cloud. When the cloud came out, we discovered lots of cloud-only threats, such as those that can only happen in multi-tenant models (for example, data on shared storage that was mistakenly never erased).
But cloud security also meant learning and applying all the security lessons we learned from traditional on-premises attacks (e.g., social engineering, unpatched software, overly permissive permissions), plus all the security issues related to virtual machines (VMs).
Since most cloud ecosystems use VMs, we had to learn about and protect against host-to-guest, guest-to-host, and guest-to-guest VM-only vulnerabilities. Add to that all the new threats and vulnerabilities introduced by microservices and containers, and making clouds as secure as they can be takes a lot of work. It’s everything we knew before, plus the new stuff.
I’ve yet to meet the security paradigm that makes security easier. Will agentic AI be any different? Probably not. We will have to apply the lessons learned from on-premises, cloud, VMs, microservices, and containers, and then add on all the new agentic AI stuff. Maybe we will get lucky, as we did with the cloud, where most of the brand-new cloud-only threats have never (so far) become a big problem. The biggest cloud security issues impacting most companies today are the same issues we worried about on-premises. We just had to learn how to recognize and manage them in the cloud.
What do you see when you think about agentic AI threats?
I’ve got one I’ll share in the next posting.