Less than a month after banning ChatGPT over privacy concerns, the Italian government reversed its stance on OpenAI’s chatbot on Friday. The decision comes after the startup addressed concerns raised about the privacy and security of user data on its artificial intelligence platform.
“ChatGPT is now available again for users in Italy,” an OpenAI spokesperson told Decrypt in an email statement. “We are thrilled to welcome them and remain committed to protecting their privacy.”
In late March, Italy joined several countries including Russia, China, North Korea, Cuba, Iran, and Syria in banning the use of ChatGPT within their borders.
Italy initially imposed the ban after reports that ChatGPT was collecting and storing user data without user consent. Such concerns have prompted other countries, including Canada, Germany, Sweden, and France, to launch their own investigations into the highly popular tool.
Earlier this month, the Garante, Italy’s data protection watchdog, extended an olive branch to OpenAI.
The agency requested that OpenAI implement an age limit, be transparent about how it handles data, provide data management options, and allow users to opt out of having their data used.
While not explicitly mentioning the situation in Italy, OpenAI rolled out a series of new features on Tuesday. These include the ability for users to turn off chat history and opt out of having their data used to train the company’s models.
“We have addressed or clarified the issues raised by the Garante,” the spokesperson told Decrypt, citing the publication of an article on how training data is collected and used, and noting that opt-out forms and privacy policies have been made more visible to users across the platform.
The spokesperson added that the company will continue to respond to privacy requests via email, introduce a new form for EU users to object to the use of their data in model training, and implement an age verification tool for Italian users at signup.
The machine learning giant also said it will continue to protect user data while acknowledging that its AI can generate unexpected, false, or unsubstantiated content about people, events, or facts.
OpenAI said: