With organizations increasingly convinced that the malicious use of artificial intelligence (AI) will outpace its defensive use, new data on the future of AI in cyber attacks and defenses should leave you very worried.
It all started with the misuse of ChatGPT to write more convincing emails, and it has since evolved into purpose-built generative AI tools for crafting malicious emails, or worse, for creating anything an attacker needs from a simple prompt. We know it's a problem, but what are organizations truly concerned about when it comes to AI's misuse?
According to Flashpoint's recently released Artificial Intelligence in Cybersecurity report, malicious AI use is a major concern:
- 62% believe offensive (read: malicious) AI use will outpace its defensive use
- 71% are concerned about “rogue AI” (an autonomous AI-based application that “behaves dangerously”)
To get a bit more practical, the top two cyber attack types that are believed to become more dangerous due to AI are:
- Phishing attacks (by 54% of organizations)
- Social engineering attacks (by 53% of organizations)
With 76% of organizations believing that the world is close to experiencing an adversarial AI that can “evade most known cybersecurity fences,” organizations need to take precautions immediately. Just over two-thirds (68%) of organizations believe that an increase in security awareness training is both necessary and the number one step to prepare for sophisticated or overwhelming AI attacks.
KnowBe4 enables your workforce to make smarter security decisions every day. Over 65,000 organizations worldwide trust the KnowBe4 platform to strengthen their security culture and reduce human risk.