Artificial Intelligence Blog

Opinion and analysis on the latest developments in artificial intelligence as they relate to cybersecurity and how to stay protected.

Forrester: AI, Cloud Computing, and Geopolitics are Emerging Cyberthreats in 2023

Wouldn’t it be great if your cybersecurity strategy had to focus on just a few threats? Sigh… if only life were that easy. But new predictions for this year’s most prevalent cyber ...
Continue Reading

AI Voice-Based Scams Rise as One-Third of Victims Can’t Tell if the Voice is Real or Not

As audio deepfake technology continues to go mainstream as part of the evolution in AI-based tools, new data shows there are plenty of victims and they aren’t prepared for such an attack.
Continue Reading

[EPIC AI FAIL] Lawyer cites fake cases invented by ChatGPT

Found this highly amusing article: Legal Twitter is having tremendous fun right now reviewing the latest documents from the case Mata v. Avianca, Inc. (1:22-cv-01461). Here’s a neat ...
Continue Reading

AI-generated Disinformation Dipped The Markets Yesterday

Insider reported that an apparently AI-generated photo faking an explosion near the Pentagon went viral. The Arlington Police Department confirmed that the image and ...
Continue Reading

[Feet on the Ground] Stepping Carefully When Making an AI Your BFF

Bloomberg's Brad Stone wrote an op-ed covering this topic. In the past month, a chatbot called "My AI" or "Sage" has appeared as a new friend for several hundred million Snapchat users. ...
Continue Reading

Heart of the Matter: How LLMs Can Show Political Bias in Their Outputs

Wired just published an interesting story about political bias that can show up in LLMs due to their training. It is becoming clear that training an LLM to exhibit a certain bias is ...
Continue Reading

Does ChatGPT Have Cybersecurity Tells?

Poker players and other human lie detectors look for “tells,” that is, a sign by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A ...
Continue Reading

OpenAI Transparency Report Highlights How GPT-4 Can be Used to Aid Both Sides of the Cybersecurity Battle

The nature of an advanced artificial intelligence (AI) engine such as ChatGPT gives its users the ability to use and misuse it, potentially empowering both security teams and threat ...
Continue Reading

Guarding Against AI-Enabled Social Engineering: Lessons from a Data Scientist's Experiment

The Verge came out with an article that got my attention. As artificial intelligence continues to advance at an unprecedented pace, the potential for its misuse in the realm of ...
Continue Reading

[Head Start] Effective Methods To Teach Social Engineering To An AI

Remember The Sims? Well, Stanford created a small virtual world with 25 ChatGPT-powered "people". The simulation ran for 2 days and showed that AI-powered bots can interact in a very ...
Continue Reading

Large Language Models Will Change How ChatGPT and Other AI Tools Revolutionize Email Scams

Large Language Models (LLMs) give AI engines like ChatGPT the fine-tuning they need to focus scam email output on only the most effective content, resulting in a wave of new email scams.
Continue Reading

Recent Artificial Intelligence Hype is Used for Phishbait

Curiosity leads people to suspend their better judgment: a new credential theft campaign exploits a person’s excitement about the newest AI systems not yet available to the general ...
Continue Reading

Win The AI Wars To Enhance Security And Decrease Cyber Risk

With all the overwrought hype around ChatGPT and AI, much of it earned, you could be forgiven for thinking that only the bad actors are going to be using these advanced technologies and the ...
Continue Reading

Italy Bans ChatGPT: A Portent of the Future, Balancing the Pros and Cons

In a groundbreaking move, Italy has imposed a ban on the widely popular AI tool ChatGPT. This decision comes in the wake of concerns over possible misinformation, biases and the ethical ...
Continue Reading

Social Engineering Attacks Utilizing Generative AI Increase by 135%

New insights from cybersecurity artificial intelligence (AI) company Darktrace show a 135% increase in novel social engineering attacks driven by Generative AI.
Continue Reading

Fake ChatGPT Scam Turns into a Fraudulent Money-Making Scheme

Using the lure of ChatGPT’s AI as a way to make money, scammers trick victims with a phishing-turned-vishing attack that eventually takes the victims’ money.
Continue Reading

Artificial Intelligence Makes Phishing Text More Plausible

Cybersecurity experts continue to warn that advanced chatbots like ChatGPT are making it easier for cybercriminals to craft phishing emails with pristine spelling and grammar, The ...
Continue Reading

Identifying AI-Enabled Phishing

Users need to adapt to an evolving threat landscape in which attackers can use AI tools like ChatGPT to craft extremely convincing phishing emails, according to Matthew Tyson at CSO.
Continue Reading

Will AI and Deepfakes Weaken Biometric MFA?

You should use phishing-resistant multi-factor authentication (MFA) when you can to protect valuable data and systems. But most biometrics and MFA are not as strong as touted and much of ...
Continue Reading

Hackers Work Around ChatGPT Malicious Content Restrictions to Create Phishing Email Content

Active discussions in hacker forums on the dark web showcase how a mixture of the OpenAI API and an automated bot on the Telegram messaging platform can be used to create malicious emails.
Continue Reading
