This as-told-to essay is based on a conversation with Harsh Varshney, a 31-year-old who works at Google and lives in New York. The following has been edited for length and clarity.
AI is rapidly becoming a silent partner in our daily lives, and for me, life without AI tools is unimaginable. Every day, they help me with in-depth research, note-taking, coding, online searches, and more.
However, my job makes me very aware of the privacy concerns that come with using AI. I've been working at Google since 2023, where I spent two years as a software engineer on the Privacy team, building infrastructure that protects user data. I then moved to the Chrome AI Security team, where I help protect Google Chrome from threats such as hackers and attackers who use AI agents to run phishing campaigns.
AI models use data to generate useful responses. As users, we need to protect our personal information so that bad actors, such as cybercriminals and data brokers, can't access it.
Here are four habits I've developed that I believe are essential to protecting your data when using AI.
Treat AI like a public postcard
In some cases, a false sense of intimacy with AI can lead us to share information online that we would never otherwise share. Whatever safeguards AI companies may have in place, I don't recommend sharing your credit card details, Social Security number, home address, personal medical history, or other personally identifiable information with an AI chatbot.
Depending on the version you use, information shared with public AI chatbots can be used to train future models so they generate more relevant responses. This can result in what's known as a “training leak,” where the model memorizes one user's personal information and later spits it back out in response to someone else's prompt. There is also the risk of a data breach, in which whatever you've shared with the chatbot is compromised.
I treat AI chatbots like a public postcard: if I wouldn't write something on a postcard for everyone to see, I don't share it with a public AI tool, because I can't be sure how my data might be used for future training.
Know which “room” you are in
It's important to know whether you're using a general, public AI tool or an enterprise-grade one.
While it's often unclear how conversations with public AI models will be used for training, companies can pay for “enterprise” models, which are usually not trained on user conversations. That makes it safer for employees to talk about their work or company projects.
Think of it as the difference between a conversation that can be overheard in a crowded coffee shop and a confidential office meeting where what's said stays in the room.
There have been reports of employees accidentally leaking company data to ChatGPT. If you're working on an unannounced corporate project or trying to obtain a patent, you probably don't want to discuss your plans with a non-enterprise-grade chatbot because of the risk of leakage.
I don't discuss projects I work on at Google with public chatbots. Instead, I use the enterprise model even for small tasks like editing work emails. Since my conversations aren't used for training, I'm much more comfortable sharing information about myself, but I still keep the personal details I share to a minimum.
Delete history regularly
AI chatbots typically save your conversation history. To protect your privacy in the long term, I recommend periodically deleting that history in both enterprise and public models. Even if you're sure you haven't entered any personal data into the tool, it's a good proactive habit, since there's always a risk of your account being compromised.
I was once surprised when Google's Gemini chatbot gave me my exact address, even though I didn't remember sharing it. It turned out I had previously asked the tool to narrow down emails that contained my address. Because it has long-term memory and can recall information from previous conversations, it was able to identify and retain my address.
Sometimes, when I'm searching for something I don't want a chatbot to remember, I use a special mode similar to a browser's incognito mode. In this mode, the bot won't save my history or use the conversation to train the model. ChatGPT and Gemini call this the “temporary chat” feature.
Use popular AI tools
I recommend using well-known AI tools, which are more likely to have clear privacy frameworks and other guardrails in place.
Outside of Google's products, I like OpenAI's ChatGPT and Anthropic's Claude.
It can also be helpful to check the privacy policy of the tool you're using; in some cases, it will detail how your data is used to train the model. In the privacy settings, you can also look for a section with an option like “Improve the model for everyone.” Making sure that setting is turned off prevents your conversations from being used for training.
AI technology is incredibly powerful, but we must use it carefully to keep our data and identities safe.
Do you have a story to share about using AI in your work? Contact this reporter at ccheong@businessinsider.com.
