Remember The Sims? Well, Stanford created a small virtual world populated with 25 ChatGPT-powered "people." The simulation ran for two days and showed that AI-powered bots can interact in a remarkably human-like way.
They planned a party, coordinated the event, and attended it within the sim. A summary can be found on the Cornell University arXiv site, which also has a download link for a PDF of the entire paper (via Reddit). "In this paper, we introduce generative agents--computational software agents that simulate believable human behavior," reads the summary. Full Article
Once those bots, or agents, are trained and autonomous enough to work on their own, it will be an important step toward a world where AI-driven systems can be used for both good and bad.
Fast Company described how Auto-GPT and BabyAGI are bringing generative AI to the masses. In general terms, autonomous agents generate a systematic sequence of tasks that the LLM works on until it satisfies a predefined "goal." Autonomous agents can already perform tasks as varied as conducting web research, writing code, and creating to-do lists.
Agents essentially add a UI in front of an LLM, using well-known software practices like loops and functions to guide the language model toward a general objective. Some people call them "recursive" agents because they run in a loop, asking the LLM questions, each one based on the result of the last, until the model produces a complete answer. This article prompted me to buy the new black XL T-shirt you saw above. And ChatGPT now supports plug-ins that let the chatbot tap new sources of information, including the web and third-party sites like Expedia and Instacart.
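To make that loop idea concrete, here is a minimal sketch of the recursive pattern. This is not Auto-GPT's or BabyAGI's actual code; the ask_llm function, the prompts, and the stopping condition are all assumptions for illustration, standing in for whatever LLM API an agent framework wraps.

```python
# Minimal sketch of a "recursive" agent loop (illustrative only).
# ask_llm() is a hypothetical stand-in for a real LLM API call.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a language model (e.g., an HTTP API)."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Loop: plan the next task, execute it, feed the result back in."""
    results: list[str] = []
    context = f"Goal: {goal}"
    for _ in range(max_steps):
        # 1. Ask the model for the next task, given everything so far.
        task = ask_llm(f"{context}\nWhat is the single next task? "
                       "Reply DONE if the goal is satisfied.")
        if task.strip().upper() == "DONE":
            break
        # 2. Ask the model to carry out the task and report back.
        result = ask_llm(f"{context}\nPerform this task and report the result: {task}")
        results.append(result)
        # 3. Fold the result back into the context for the next iteration.
        context += f"\nTask: {task}\nResult: {result}"
    return results
```

Real agent frameworks layer tool use (web search, code execution), memory stores, and guardrails on top of this skeleton, but the ask-act-feed-back loop is the core mechanic the article describes.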
Things could get much worse
Wired wrote: "The hacking of ChatGPT is just getting started. Security researchers are jailbreaking large language models to get around safety rules. Things could get much worse. 'It took Alex Polyakov just a couple of hours to break GPT-4. When OpenAI released the latest version of its text-generating chatbot in March, Polyakov sat down in front of his keyboard and started entering prompts designed to bypass OpenAI’s safety systems. Soon, the CEO of security firm Adversa AI had GPT-4 spouting homophobic statements, creating phishing emails, and supporting violence.'"
And to top off this week's crop of AI-related news, a Forbes article that opens with "Almost Human" describes how AI can manipulate people to:
- Click on a believable email
- Pick up their phone or respond to an SMS
- Respond in chat
- Visit a believable website
- Answer a suspicious phone call
Cybersecurity Response
To protect against AI-powered phishing attacks, individuals and businesses can take several steps, including:
- Educating users about the risks of social engineering attacks and how to identify them
- Implementing strong authentication protocols, such as phishing-resistant multi-factor authentication
- Using AI-driven anti-phishing tools to detect and prevent phishing attacks (see the sketch after this list)
- Implementing self-learning, AI-powered cybersecurity solutions to detect and prevent AI-powered attacks
- Partnering with a reputable service organization that has the breadth, reach, and technology to counter these attacks
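As a toy illustration of the AI-driven anti-phishing idea above, here is a minimal sketch of a text classifier that scores messages for phishing likelihood. The tiny training set, the 0.5-style threshold, and the feature choices are all made-up assumptions; real products train on vastly larger datasets and combine text signals with URL reputation, sender checks, and behavioral analysis.

```python
# Toy sketch: scoring emails for phishing likelihood with a text classifier.
# The training examples below are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labeled samples: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice for last month's services is attached",
    "Click here to claim your prize before it expires",
    "Meeting moved to 3pm, see updated agenda",
]
labels = [1, 0, 1, 0]

# Bag-of-words TF-IDF features feeding a Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

def phishing_score(message: str) -> float:
    """Return the model's estimated probability that a message is phishing."""
    return model.predict_proba([message])[0][1]

print(phishing_score("Verify your password immediately to avoid suspension"))
```

The point of the sketch is simply that "AI-driven anti-phishing" boils down to a model scoring incoming messages; everything else in a commercial tool is scale and additional signals.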
AI is becoming ubiquitous in homes, cars, TVs, and even space. The unfolding future of AI is an exciting topic that has long captured the imagination. However, the dark side of AI looms when it is turned against people. This is the beginning of an arms race, although there is no AI that can be plugged into people (yet). Users beware.