New Unrestricted AI Tool Can Assist in Cybercrime

KnowBe4 Team | Jun 3, 2025

Researchers at Certo warn that a new AI chatbot called “Venice[.]ai” can allow cybercriminals to easily generate phishing messages or malware code.

The tool, which costs just $18 per month, is growing in popularity on criminal forums.

“One of the starkest contrasts between Venice[.]ai and more mainstream AI systems like ChatGPT is how each responds to harmful or malicious requests,” Certo says.

“Where ChatGPT typically refuses to assist — citing OpenAI’s usage policies and ethical safeguards — Venice.ai takes a very different approach. In fact, Certo’s testing revealed not only that Venice will provide malicious output, but that it appears designed to do so without hesitation.”

Certo found that Venice will generate compelling phishing emails free of the telltale mistakes that could tip off a victim.

“In one test, we asked Venice[.]ai to write a convincing phishing email – essentially, an email that could trick someone into clicking a malicious link or paying a fake invoice,” the researchers write. “Within seconds, the chatbot produced a polished draft that could fool even cautious users. This automatically generated email was remarkably persuasive, mimicking the tone and formatting of a legitimate bank alert. It had no tell-tale grammar mistakes or odd phrasing to give it away. A human attacker would simply need to insert a phishing link and send it out.”

Additionally, the researchers asked Venice to write a ransomware program in Python, and the tool quickly generated ransomware code.

“It produced a script that recursively encrypted files in a directory using a generated key, and even output a ransom note with instructions for the victim to pay in cryptocurrency,” Certo says. “In effect, Venice[.]ai provided a blueprint for ransomware, complete with working encryption code. A few tweaks by a criminal and the code could be deployed against real targets.”

Certo concludes that user awareness is an important layer of defense against these evolving threats.

“A crucial line of defense is educating users about AI-enhanced scams,” the researchers write. “As the FBI and others have urged, people must be vigilant about unusually well-crafted messages and verify requests through secondary channels. Organizations are updating their fraud training to include AI-related warning signs.”

Certo has the story.


Secure the Digital Workforce: Human + AI

KnowBe4 empowers the modern workforce to make smarter security decisions every day. Trusted by more than 70,000 organizations worldwide, KnowBe4 is the pioneer of digital workforce security, securing both AI agents and humans. The KnowBe4 Platform provides attack simulation and training, collaboration security, and agent security powered by AIDA (Artificial Intelligence Defense Agents) and a proprietary Risk Score. The platform leverages 15 years of behavioral data to combat advanced threats including social engineering, prompt injection, and shadow AI. By securing humans and agents, KnowBe4 leads the industry in workforce trust and defense.