Microsoft and OpenAI Team Up to Block Threat Actor Access to AI

Stu Sjouwerman | Mar 5, 2024

Analysis of emerging threats in the age of AI provides insight into exactly how cybercriminals are leveraging AI to advance their efforts.

When ChatGPT first came out, it included some rudimentary security policies intended to prevent its misuse for cybercriminal activity. But threat actors quickly found ways around those policies and continued to use it for malicious purposes.

According to newly published research from Microsoft and OpenAI, the two companies have joined forces to detect, terminate and block malicious access to OpenAI's services. Some examples of how specific threat groups were misusing OpenAI include:

  • Charcoal Typhoon researched various companies and cybersecurity tools, debugged code, generated scripts and created content for use in phishing campaigns.
  • Salmon Typhoon translated technical papers, retrieved publicly available information on intelligence agencies, built malicious code and researched common ways processes could be hidden on a system.
  • Crimson Sandstorm obtained scripting support related to app and web development, generated content for spear-phishing campaigns, and researched common ways malware could evade detection.
  • Emerald Sleet identified experts and organizations focused on cyber defense in the Asia-Pacific region, gathered detail on publicly available vulnerabilities, obtained help with basic scripting tasks, and drafted content that could be used in phishing campaigns.

According to the research, the offending access was terminated and new safety protocols were adopted to help prevent this type of abuse.

What this research shows is that the misuse of AI is no longer hypothetical; I’ve already pointed to research establishing a high likelihood that phishing content is being written by AI. And now, with the research from Microsoft and OpenAI, we can conclude that these same services are indeed being used to make cyberattacks more sophisticated and successful.

You’ll note that in many of the examples provided above, writing phishing content is a consistent theme. So stepping up your organization's ability to spot malicious phishing emails is going to be critical moving forward; users need to be educated via new-school security awareness training to be vigilant, be skeptical, and serve as the last line of defense against phishing attacks.

KnowBe4 empowers your workforce to make smarter security decisions every day. Over 65,000 organizations worldwide trust the KnowBe4 platform to strengthen their security culture and reduce human risk.

