The question is no longer “What can I do with ChatGPT?” It’s “What should I share?”
Internet users are generally aware of the risks of data breaches and of how their personal information is used online. But ChatGPT’s seductive features seem to create a blind spot for dangers we usually try to avoid. OpenAI only recently announced a new privacy feature that lets ChatGPT users disable their chat history and prevent their conversations from being used to train and refine its models.
“This is a step in the right direction,” said Nader Henein, vice president of privacy research at Gartner, who has 20 years of experience in enterprise cybersecurity and data protection. “But the fundamental problem with privacy and AI is that once the model is built, there is little we can do about retroactive governance.”
Henein says to think of ChatGPT as a friendly stranger sitting behind you on a bus, recording you with a camera phone. “They have a very soft voice and seem nice. So why not have the same conversation with them? Because that’s what it is. And if it hurts you, it’s like a sociopath; it won’t think twice about it.”
Even OpenAI CEO Sam Altman acknowledges the risks of relying on ChatGPT. “It’s a mistake to be relying on it for anything important right now. We have lots of work to do on robustness and truthfulness,” he tweeted in December 2022.
Basically, treat ChatGPT prompts like anything else you publish online. “There is an assumption that anything you put on the internet (email, social media, blogs, LLMs) can be read by anyone in the world. Never post anything you wouldn’t want read,” said Gary Smith, Fletcher Jones Professor of Economics at Pomona College and author of Distrust: Big Data, Data-Torturing, and the Assault on Science. ChatGPT can be used as an alternative to Google search or Wikipedia as long as its output is fact-checked, he said. But it shouldn’t be relied on for much else.
The bottom line is that using ChatGPT is still risky, and its allure makes those risks easy to overlook. Whether you use ChatGPT in your personal life or to boost your productivity at work, consider this a friendly reminder to think twice about what you share with it.
Understand the risks of using ChatGPT
First, let’s take a look at what OpenAI tells users about how it uses their data. Not everyone has the same privacy priorities, but it’s worth knowing the details for the next time you open ChatGPT.
1. Hackers could break into a hugely popular app
First and foremost, someone outside of OpenAI could hack in and steal your data. Using any third-party service carries the risk of data leaks through bugs or hackers, and ChatGPT is no exception. In March 2023, a ChatGPT bug was discovered to have leaked conversation titles, the first messages of new conversations, and payment information belonging to ChatGPT Plus users.
“All of this information you’re shoving in is very problematic, because it could well be susceptible to machine learning attacks. That’s number one,” Henein said. “Second, it’s probably sitting in plaintext somewhere in a log. I don’t know whether anyone will look at it, and neither do you. That’s the problem.”
2. Conversations are stored somewhere on the server
While unlikely to happen often, certain authorized OpenAI employees can access user content. OpenAI says on its ChatGPT FAQ page that user content is stored on its own systems and on “systems of trusted service providers in the US.” So while OpenAI removes personally identifiable information, user content exists in raw form on its servers before it is anonymized. A select group of authorized OpenAI personnel can access that content for four distinct reasons, one of which is to fine-tune the models, unless the user opts out.
3. Your conversations are used to train the model (unless you opt out)
We’ll get to opting out later, but unless you do, your conversations are used to train ChatGPT. According to its data usage policies, which are scattered across several articles on its site, OpenAI “may use the data provided to improve its models.” On a separate page, OpenAI says it may “aggregate or anonymize personal information and use the aggregated information to analyze the effectiveness of our services.” This means that, in theory, anything the model “learns” from your inputs, such as a trade secret, could eventually surface in public.
Previously, users could only opt out of sharing data with the models through a Google Form linked on the FAQ page. OpenAI has since introduced a more explicit way to disable data sharing: a toggle setting within your ChatGPT account. But even with this new “incognito mode,” your conversations are still stored on OpenAI’s servers for 30 days, and the company says little about how it keeps that data safe.
4. Your data won’t be sold to third parties, the company says
OpenAI says it won’t share user data with third parties for marketing or advertising purposes, so that’s one less thing to worry about. It does, however, share user data with vendors and service providers for site maintenance and operations.
What happens when you use ChatGPT at work?
ChatGPT and other generative AI tools are touted as the ultimate productivity hack. ChatGPT can draft articles, emails, social media posts, and summaries of long texts. “There is no use case I can think of that hasn’t already been tried,” Henein said.
But when Samsung employees used ChatGPT to check code, they inadvertently leaked trade secrets. The electronics company has since banned the use of ChatGPT and threatened disciplinary action against employees who don’t comply with the new restrictions. Financial institutions such as JPMorgan, Bank of America, and Citigroup have also banned or restricted ChatGPT, citing strict financial regulations around third-party messaging. Apple has banned its employees from using the chatbot as well.
The temptation to shrink mundane tasks down to seconds seems to mask the fact that users are essentially publishing this information online. “You think of it like a calculator; you think of it like Excel,” Henein said. “You don’t expect that information to go to the cloud and live permanently in a log somewhere, or in the model itself.”
So if you use ChatGPT at work to clarify concepts you don’t understand, polish copy, or analyze publicly available data, and there’s no rule against it, proceed with caution. But if, say, you’re asked to evaluate code for a top-secret missile guidance system you’re developing, or to summarize your boss’s meeting about a corporate spy embedded at a competitor, don’t. It could cost you your job, or worse.
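One practical way to apply this advice is to strip obvious identifiers out of text before pasting it into ChatGPT. The sketch below is a hypothetical helper, not an official OpenAI tool; the patterns are illustrative examples and would need to be extended for real secrets (names, internal hostnames, source code, and so on).

```python
import re

# Hypothetical, illustrative patterns only; real redaction would need
# far more coverage (names, addresses, internal identifiers, etc.).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def scrub(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder
    such as [EMAIL], so the original value never leaves your machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Contact jane.doe@example.com or 555-123-4567; key sk-abcdef1234567890ABCD"
print(scrub(prompt))  # Contact [EMAIL] or [PHONE]; key [API_KEY]
```

A blunt regex filter like this is no substitute for judgment (it can’t recognize a trade secret), but it catches the careless cases, and it runs locally before anything is sent to a server.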
What happens when you use ChatGPT as your therapist?
A study conducted by health tech company Tebra found that one in four Americans are more likely to talk to an AI chatbot than to attend therapy. Users have shared stories of turning to ChatGPT as a form of therapy or seeking its help with substance abuse issues. These examples were shared as intriguing use cases of ChatGPT as a helpful, non-judgmental, anonymous conversation partner. But your deepest, darkest confessions are stored on a server somewhere.
People tend to think of their ChatGPT sessions as walled gardens, Henein said. “You assume that when you log out, everything in that [session] gets flushed down the toilet and the conversation is gone. But it’s not.”
If you spend time online, your personal data is probably already scattered all over the internet. But none of it sits in the conversational medium of ChatGPT, which can make you feel compelled to reveal your most intimate thoughts. “LLMs are an illusion. A powerful illusion, but still an illusion reminiscent of Joseph Weizenbaum’s Eliza computer program in the 1960s,” Smith said.
Smith is referring to the “Eliza effect,” the human tendency to anthropomorphize inanimate objects. “Users knew they were interacting with a computer program, yet many were convinced that the program had human-like intelligence and emotions, and were delighted to share their deepest feelings and closest secrets with it.”
So, given how OpenAI stores your conversations, don’t fall for the illusion of an all-knowing mental health wizard and blurt out your innermost thoughts, unless you’re ready to put those thoughts out into the world.
How to protect your data on ChatGPT
There is a way to use ChatGPT in a sort of incognito mode: your conversations are still stored for 30 days, but they aren’t used for model training. Click your account name, open Settings, and select “Data Controls.” From there, you can toggle off “Chat history and training.” Under “General,” you can also clear past conversations by clicking “Clear all chats.”
Go to your settings page and disable chat history. Credit: OpenAI
