KnowBe4 Security Awareness Training Blog

Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears

Written by Stu Sjouwerman | Mar 8, 2023 11:12:43 AM

Robert Lemos at DARKReading just reported on a worrying trend, and the title says it all: more than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes.

"Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service.

In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential information, client data, source code, or regulated information to the LLM. 

In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company.

And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven.

'There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps,' he says. 'And how that plays out [remains to be seen] — I think, we're in pregame; we're not even in the first inning.'"
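
To make the risk more concrete, here is a minimal, hypothetical sketch of the kind of outbound-prompt screening a data loss prevention tool might perform before text is ever submitted to an external AI service. The patterns, names, and logic below are illustrative assumptions for this post only, not a description of how Cyberhaven's product actually works.

```python
import re

# Illustrative only: a rough, regex-based screen for outbound LLM prompts.
# Real DLP products use far more sophisticated detection; these patterns
# are simplified assumptions.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # very rough match
    "confidential_marker": re.compile(
        r"\b(confidential|internal only|do not distribute)\b", re.IGNORECASE
    ),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Please turn our CONFIDENTIAL 2023 strategy document into a slide deck: ..."
findings = screen_prompt(prompt)
if findings:
    # Block or warn before the text reaches an external AI service.
    print(f"Blocked: prompt appears to contain sensitive content ({', '.join(findings)})")
else:
    print("Prompt passed the basic screen.")
```

Even a basic screen like this would have flagged the strategy-document example above, which is exactly why pairing technical controls with user awareness matters.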

Your employees need to be stepped through new-school security awareness training so that they understand the risks of pasting sensitive corporate data into tools like ChatGPT.