- Cybersecurity experts are warning against sharing too much information with AI chatbots like ChatGPT and Google's Gemini.
- Chat data can be used to train generative AI and also makes personal data searchable.
- Many companies lack AI policies, leading to employees unknowingly putting sensitive information at risk.
This essay is based on a conversation with Sebastian Gierlinger, vice president of engineering at Storyblok, a 240-person content management system company based in Austria. It has been edited for length and clarity.
I'm a security expert and VP of Engineering at a content management system company whose clients include Netflix, Tesla, and Adidas.
While we believe that artificial intelligence and its latest developments will benefit work processes, the capabilities of these new generative AI chatbots also demand greater attention and awareness from users.
Here are four things to keep in mind when interacting with AI chatbots like OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, and Perplexity AI.
Treat chatbots like social media
The important thing to remember when using these chatbots is that the conversation isn’t just between you and the AI.
I use ChatGPT and similar large language models (LLMs) myself for holiday suggestions, with prompts such as “Where is a nice sunny place in May with clean beaches and temperatures above 25 degrees?”
But being too detailed can be problematic: Companies can use these details to train their next models, and parts of my life could become searchable if someone asks a new system for details about me.
The same goes for sharing financial details or your net worth with these LLMs. I have not seen this happen yet, but the worst case would be for your personal information to be entered into a system and then revealed in someone else's search.
Models likely already exist that can estimate your net worth based on where you live, what industry you work in, and details about your parents and your lifestyle. That might be enough to determine whether you would be an easy target for fraud, for example.
If you're unsure what details to share, ask yourself whether you would post it on Facebook. If the answer is no, don't upload it to your LLM.
Follow your company's AI guidelines
As the use of AI in the workplace becomes more common for tasks like coding and analytics, it's important to follow your company's AI policies.
For example, my company keeps a list of sensitive items that are not allowed to be uploaded to any chatbot or LLM, including salaries, employee records, and financial performance.
We do this because we don't want someone to type a prompt like “What is Storyblok's business strategy?” and have ChatGPT answer, “Storyblok is currently working on 10 new opportunities with companies 1, 2, 3, and 4 and expects $X, Y, and Z in revenue next quarter.” That would be a huge problem for us.
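To make this concrete, a policy like ours could be enforced with a simple pre-submission filter that screens prompts before they ever reach an external chatbot. The Python sketch below is a minimal illustration; the pattern list and function name are hypothetical, not our actual tooling.

```python
import re

# Illustrative blocklist only; a real policy list would be maintained
# centrally and be far more complete than these few patterns.
SENSITIVE_PATTERNS = [
    r"\bsalar(?:y|ies)\b",
    r"\bemployee\s+(?:record|data|information)\b",
    r"\bfinancial\s+performance\b",
    r"\bbusiness\s+strategy\b",
]

def is_prompt_allowed(prompt: str) -> bool:
    """Return True only if the prompt matches none of the restricted patterns."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in SENSITIVE_PATTERNS)

print(is_prompt_allowed("Where is a nice sunny place in May?"))         # True
print(is_prompt_allowed("Summarize our financial performance for Q3"))  # False
```

A keyword screen like this will never catch everything, which is why the policy itself, and people actually knowing it, still matters most.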
When it comes to coding, our policy is that AI tools like Microsoft's Copilot are not accountable for any code: everything generated by AI must be checked by a human developer before it is committed to the repository.
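One way a team could operationalize that rule is with a commit-msg hook that rejects commits flagged as AI-generated unless a human reviewer has signed off. The sketch below is hypothetical: the “AI-Generated:” and “Reviewed-by:” trailers are illustrative conventions I'm assuming here, not a standard or our actual setup.

```python
#!/usr/bin/env python3
"""Hypothetical commit-msg hook (saved as .git/hooks/commit-msg): reject
commits tagged as AI-generated unless a human reviewer has signed off."""
import sys

def main(msg_path: str) -> None:
    # Git invokes the commit-msg hook with the path to the message file.
    with open(msg_path) as f:
        msg = f.read()
    if "AI-Generated: true" in msg and "Reviewed-by:" not in msg:
        sys.exit("Rejected: AI-generated code needs a human 'Reviewed-by:' trailer.")

if __name__ == "__main__":
    main(sys.argv[1])
```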
Use your LLM judiciously in the workplace
Roughly 75% of companies still don't have an AI policy. Many employers don't even have enterprise AI subscriptions; they simply tell employees, “You're not allowed to use AI at work.”
But people being people, they will end up using AI on their personal accounts.
This is when it becomes important to be careful about what you enter into your LLM.
Previously, there was no real reason to upload company data to a random website. But now, for example, a finance or consulting employee who wants to analyze a budget can easily upload company or client numbers to a platform like ChatGPT and ask questions. They could be leaking sensitive data without even realizing it.
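One lightweight habit that helps here is redacting obvious figures before pasting anything into an external tool. The snippet below is a minimal sketch of that idea, assuming a simple regex for currency amounts; real data-loss-prevention tooling is far more thorough.

```python
import re

# Minimal illustration: mask simple currency figures before text leaves
# the company. This catches only obvious patterns like "$4.2 million".
MONEY = re.compile(r"[$€£]\s?\d[\d,.]*(?:\s?(?:million|billion|thousand))?",
                   re.IGNORECASE)

def redact_figures(text: str) -> str:
    return MONEY.sub("[REDACTED]", text)

print(redact_figures("Q3 budget is $4.2 million against €310,000 in costs."))
# -> Q3 budget is [REDACTED] against [REDACTED] in costs.
```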
Differentiate between chatbots
It’s also important to differentiate between AI chatbots, as not all of them are built the same.
When I use ChatGPT, I trust that OpenAI and everyone involved in its supply chain do their best to ensure cybersecurity and that my data won't be leaked to bad actors. At this point, I trust OpenAI.
In my opinion, the most dangerous AI chatbots are the home-grown ones: you find them on airline or doctors' websites, and they may not invest in security updates.
For example, if a doctor incorporates a chatbot into their website to do initial triage, users may start entering very personal health data that, if leaked, could allow others to learn about their illnesses.
As AI chatbots become more human-like, we are more likely to share more information and open up to topics we would not have discussed before. As a general rule of thumb, I would advise people not to blindly use every chatbot they come across and to avoid being too specific, regardless of which LLM they are speaking to.
Do you work in technology or cybersecurity and have a story to share about your experience with AI? Contact this reporter: shubhangigoel@insider.com.
