Microsoft and OpenAI Team Up to Block Threat Actor Access to AI



Analysis of emerging threats in the age of AI provides insight into exactly how cybercriminals are leveraging AI to advance their efforts.

When ChatGPT first came out, it had only rudimentary security policies intended to prevent its misuse for cybercriminal activity. Threat actors quickly found ways around those policies and continued to use it for malicious purposes.

According to newly published research from Microsoft and OpenAI, the two companies have joined forces to detect, terminate, and block malicious access to OpenAI's services. Some examples of how specific threat groups were misusing OpenAI include:

  • Charcoal Typhoon researched various companies and cybersecurity tools, debugged code, generated scripts and created content for use in phishing campaigns.
  • Salmon Typhoon translated technical papers, retrieved publicly available information on intelligence agencies, built malicious code and researched common ways processes could be hidden on a system.
  • Crimson Sandstorm obtained scripting support related to app and web development, generated content for spear-phishing campaigns, and researched common ways malware could evade detection.
  • Emerald Sleet identified experts and organizations focused on cyber defense in the Asia-Pacific region, gathered detail on publicly available vulnerabilities, obtained help with basic scripting tasks, and drafted content that could be used in phishing campaigns.

According to the research, these threat actors' access was terminated, and new safety protocols were adopted to help prevent this type of abuse.

What these details show is that the misuse of AI is no longer hypothetical; I've already pointed to research that establishes a high likelihood that phishing content is being written by AI. And now, with the research from Microsoft and OpenAI, we can conclude that these same services are indeed being used to make cyberattacks more sophisticated and successful.

You'll note that in many of the examples above, writing phishing content is a consistent theme. So, stepping up your organization's ability to spot malicious phishing emails is going to be critical moving forward; users need to be educated via new-school security awareness training to be vigilant, be skeptical, and serve as the last line of defense against phishing attacks.

KnowBe4 empowers your workforce to make smarter security decisions every day. Over 65,000 organizations worldwide trust the KnowBe4 platform to strengthen their security culture and reduce human risk.


Request A Demo: Security Awareness Training

New-school Security Awareness Training is critical to enabling you and your IT staff to connect with users and help them make the right security decisions all of the time. This isn't a one-and-done deal; continuous training and simulated phishing are both needed to mobilize users as your last line of defense. Request your one-on-one demo of KnowBe4's security awareness training and simulated phishing platform and see how easy it can be!

Request a Demo!

PS: Don't like to click on redirected buttons? Cut & Paste this link in your browser:

https://www.knowbe4.com/kmsat-security-awareness-training-demo
