As the use of cloud SaaS platforms offering generative AI solutions increases, so does the likelihood of more “GPT” attacks designed to harvest credentials, payment information, and corporate data.
Netskope’s Cloud and Threat Report 2024 shows massive growth in the use of generative AI solutions – from just above 2% of enterprise users prior to 2023 to over 10% by November of last year. Mainstream AI services ChatGPT, Grammarly, and Google Bard top the list of those used.

But it’s that roughly 400% growth that has me worried.
I’ve already covered how AI services used for business purposes have the potential to leak the data submitted in a query to someone other than the original user. ChatGPT was shown to do this back in June of last year (and I assume that issue has since been addressed), but that doesn’t mean the next big generative AI solution won’t have the same problem.
I’ve also talked about cybercriminals pretending to operate the “next big thing” in generative AI to lure unwitting victims into “signing up” and handing over their credit card details.
What worries me is that the growth in the use of generative AI solutions is going to trigger a corresponding increase in both of these kinds of scams, making it tougher for organizations to keep themselves and their data safe.
Since users are on the front line of finding and using these services, it’s imperative that ongoing new-school security awareness training be part of the organization’s acceptable use policy, so that users can be kept up to date on such scams the moment they appear.
KnowBe4 enables your workforce to make smarter security decisions every day. Over 65,000 organizations worldwide trust the KnowBe4 platform to strengthen their security culture and reduce human risk.