Human Risk Management Blog

Artificial Intelligence

Opinion and analysis on the latest developments in artificial intelligence as they relate to cybersecurity, and how to stay protected.

Researchers uncover surprising method to hack the guardrails of LLMs

Researchers from Carnegie Mellon University and the Center for A.I. Safety have discovered a new prompt injection method to override the guardrails of large language models (LLMs). These ...

Facebook Scams Impersonate AI Tools

Fraudsters are spreading scams on Facebook that pose as ads for legitimate AI tools, according to researchers at Check Point. The Facebook pages impersonate ChatGPT, Google Bard, ...

[HEADS UP] See WormGPT, the new "ethics-free" Cyber Crime attack tool

A new generative AI model called “WormGPT” is being offered on cybercrime forums, according to researchers at SlashNext. While other AI tools, such as ChatGPT, have safeguards in place ...

Is AI-Generated Disinformation on Steroids About To Become a Real Threat for Organizations?

A researcher was alerted to a fake website containing fabricated quotes that appeared to have been written by him. The age of generative artificial intelligence (AI) toying with our public ...

Takeaways From a Threat Intelligence Specialist on Artificial Intelligence Being a 'Double-Edged Sword'

While artificial intelligence (AI) has been the hot topic of the year, a theme I continue to see is that AI is being used for both good and evil.

Forrester: AI, Cloud Computing, and Geopolitics are Emerging Cyberthreats in 2023

Wouldn’t it be great if your cybersecurity strategy only had to focus on a few threats? Sigh… if only life were that easy. But new predictions for this year’s most prevalent cyber ...

AI Voice-Based Scams Rise as One-Third of Victims Can’t Tell if the Voice is Real or Not

As audio deepfake technology continues to go mainstream as part of the evolution in AI-based tools, new data shows there are plenty of victims and they aren’t prepared for such an attack.

[EPIC AI FAIL] Lawyer cites fake cases invented by ChatGPT

Found this highly amusing article: Legal Twitter is having tremendous fun right now reviewing the latest documents from the case Mata v. Avianca, Inc. (1:22-cv-01461). Here’s a neat ...

AI-generated Disinformation Dipped The Markets Yesterday

The Insider reported that an apparently AI-generated photo faking an explosion near the Pentagon, just outside Washington, D.C., went viral. The Arlington Police Department confirmed that the image and ...

[Feet on the Ground] Stepping Carefully When Making an AI Your BFF

Bloomberg's Brad Stone wrote an op-ed covering this topic. In the past month, a chatbot called "My AI" or "Sage" has appeared as a new friend for several hundred million Snapchat users. ...

Heart of the Matter: How LLMs Can Show Political Bias in Their Outputs

Wired just published an interesting story about political bias that can show up in LLMs due to their training. It is becoming clear that training an LLM to exhibit a certain bias is ...

Does ChatGPT Have Cybersecurity Tells?

Poker players and other human lie detectors look for “tells,” that is, signs by which someone might unwittingly or involuntarily reveal what they know or what they intend to do. A ...

OpenAI Transparency Report Highlights How GPT-4 Can be Used to Aid Both Sides of the Cybersecurity Battle

The nature of an advanced artificial intelligence (AI) engine such as ChatGPT gives its users the ability to use it or misuse it, potentially empowering both security teams and threat ...

Guarding Against AI-Enabled Social Engineering: Lessons from a Data Scientist's Experiment

The Verge came out with an article that got my attention. As artificial intelligence continues to advance at an unprecedented pace, the potential for its misuse in the realm of ...

[Head Start] Effective Methods For Teaching Social Engineering To An AI

Remember The Sims? Well, Stanford created a small virtual world with 25 ChatGPT-powered "people". The simulation ran for 2 days and showed that AI-powered bots can interact in a very ...

Large Language Models Will Change How ChatGPT and Other AI Tools Revolutionize Email Scams

Large Language Models (LLMs) provide the fine-tuning that AI engines like ChatGPT need to focus scam email output on only the most effective content, resulting in a wave of new email scams.

Recent Artificial Intelligence Hype is Used for Phishbait

Curiosity leads people to suspend their better judgment as a new credential theft campaign exploits excitement about the newest AI systems not yet available to the general ...

Win The AI Wars To Enhance Security And Decrease Cyber Risk

With all the overwrought hype around ChatGPT and AI…much of it earned…you could be forgiven for thinking that only the bad actors are going to be using these advanced technologies and the ...

Italy Bans ChatGPT: A Portent of the Future, Balancing the Pros and Cons

In a groundbreaking move, Italy has imposed a ban on the widely popular AI tool ChatGPT. This decision comes in the wake of concerns over possible misinformation, biases and the ethical ...

Social Engineering Attacks Utilizing Generative AI Increase by 135%

New insights from cybersecurity artificial intelligence (AI) company Darktrace show a 135% increase in novel social engineering attacks driven by generative AI.


Get the latest insights, trends and security news. Subscribe to CyberheistNews.