- Italy last week became the first western country to ban ChatGPT, a popular AI chatbot.
- ChatGPT has impressed researchers with its capabilities while worrying regulators and ethicists about its negative impact on society.
- The move highlights the lack of specific regulation, with the EU and China among the few jurisdictions developing rules tailored to AI.
- Various governments are considering how to regulate AI, and some are considering how to deal with general-purpose systems such as ChatGPT.
The ChatGPT logo, seen in a photo illustration taken in Washington, DC, on March 15, 2023.
Stefani Reynolds | AFP | Getty Images
Italy has become the first country in the West to ban ChatGPT, a popular artificial intelligence chatbot from US startup OpenAI.
Last week, Italy’s data protection watchdog ordered OpenAI to temporarily stop processing the data of Italian users amid an investigation into alleged violations of Europe’s strict privacy regulations.
The regulator, known as the Garante, cited a data breach at OpenAI that allowed users to view the titles of other users’ conversations with the chatbot.
“There appears to be no legal basis for collecting and processing large amounts of personal data to ‘train’ the algorithms the platform relies on,” the Garante said in a statement Friday.
Garante also expressed concerns about ChatGPT’s lack of age restrictions and how chatbots could provide misleading information in responses.
Microsoft-backed OpenAI risks a fine of up to 20 million euros ($21.8 million), or 4% of its annual global turnover, if it does not remedy the situation within 20 days.
Italy is not the only country reckoning with the rapid pace of AI advancement and its impact on society. Other governments are coming up with their own rules for AI, which, whether or not they mention generative AI explicitly, will no doubt touch on it. Generative AI refers to a set of AI technologies that generate new content based on prompts from users. It is more advanced than previous iterations of AI, thanks in part to large language models trained on vast quantities of data.
There have long been calls for AI to face regulation. But the pace at which the technology is advancing is proving difficult for governments to keep up with. Computers can now create realistic art, write entire essays and generate lines of code.
“Technology is here to serve us. It’s about making cancer diagnoses faster, and about sparing people from doing jobs they don’t want to do,” Sophie Hackford, a futurist and global innovation advisor at farm equipment maker John Deere, told CNBC’s “Squawk Box Europe” on Monday.
“We need to think about it very carefully now, and from a regulatory standpoint, we need to act now,” she added.
Regulators are concerned about the challenges AI poses to job security, data privacy and equality. There are also worries that advanced AI could generate false information used to manipulate political discourse.
Many governments are also beginning to think about how to deal with general-purpose systems such as ChatGPT, with some considering joining Italy in banning the technology.
Last week, the UK announced plans for regulating AI. Rather than establishing new regulations, the government asked regulators in different sectors to apply existing rules to AI.
The UK proposals do not mention ChatGPT by name, but they outline key principles companies should follow when using AI in their products, including safety, transparency, fairness, accountability and contestability.
The UK is not proposing restrictions on ChatGPT, or any other kind of AI, at this stage. Instead, it wants to ensure that companies develop and use AI tools responsibly and give users sufficient information about how and why certain decisions are made.
In a speech to parliament last Wednesday, Digital Minister Michelle Donelan said the sudden popularity of generative AI showed that the risks and opportunities surrounding the technology are “emerging at a staggering pace.”
By taking a non-statutory approach, the government can “respond quickly to advances in AI and intervene further if necessary,” she added.
Dan Holmes, fraud prevention leader at Feedzai, which uses AI to fight financial crime, said the main priority of the UK approach is to address “what the appropriate use of AI looks like.”
“It’s saying, if you’re using AI, these are the principles to think about,” Holmes told CNBC. “It often boils down to two things: transparency and fairness.”
The rest of Europe is expected to take a far more restrictive stance on AI than the UK, which has increasingly diverged from the EU’s digital rulebook.
The European Union, which is often at the forefront when it comes to technical regulation, has proposed groundbreaking legislation on AI.
Known as the EU AI Act, the regulation will heavily restrict the use of AI in critical infrastructure, education, law enforcement and the judicial system.
It will work in tandem with the EU’s General Data Protection Regulation, which governs how companies process and store personal data.
When the AI Act was first conceived, officials had not accounted for the astonishing progress of AI systems capable of generating impressive art, stories, jokes, poems and songs.
According to Reuters, the EU’s draft rules consider ChatGPT a form of general-purpose AI used in high-risk applications. High-risk AI systems are defined by the Commission as those that could affect people’s fundamental rights and safety.
They would face measures including rigorous risk assessments and requirements to stamp out discrimination arising from the datasets that feed the algorithms.
The EU has a lot of AI expertise and access to some of the best talent in the world, and its member states are well aware of both the potential competitive advantages these technologies bring and the risks, according to comments made to CNBC.
But while Brussels finalizes legislation on AI, some EU countries are already considering Italy’s actions on ChatGPT and debating whether to follow suit.
“In principle, a similar procedure is possible in Germany,” Ulrich Kelber, Germany’s Federal Commissioner for Data Protection, told Handelsblatt.
French and Irish privacy regulators have contacted their Italian counterparts to learn more about the basis of the ban, Reuters reported, while Sweden’s data protection authority ruled out a ban of its own. Italy is able to take such action because OpenAI does not have a single establishment in the EU.
Ireland is usually the most active regulator when it comes to data privacy, as most US tech giants such as Meta and Google have offices there.
The US has yet to propose formal rules to monitor AI technology.
The country’s National Institute of Standards and Technology has released a national framework offering companies that use, design or deploy AI systems guidance on managing risks and potential harms.
But because the framework is voluntary, businesses face no consequences for not complying with it.
So far, there have been no reports of any action being taken to restrict ChatGPT in the US.
Last month, the Federal Trade Commission received a complaint from a nonprofit research group alleging that OpenAI’s latest large language model, GPT-4, is “biased, deceptive, and threatens privacy and public safety,” and violates the agency’s AI guidelines.
The complaint could lead to an investigation into OpenAI and a suspension of the commercial deployment of its large language models. The FTC declined to comment.
ChatGPT is not available in China, nor in countries with heavy internet censorship such as North Korea, Iran and Russia. It is not officially blocked in China, but OpenAI does not allow users there to sign up.
Several of China’s biggest tech companies, including Baidu, Alibaba and JD.com, have announced plans to develop ChatGPT rivals.
China has been keen to ensure its tech giants develop such products in line with its strict regulations.
Last month, Beijing introduced first-of-their-kind rules on so-called deepfakes, synthetically generated or altered images, videos or text made using AI.
Chinese regulators have previously introduced rules governing how companies operate recommendation algorithms. One requirement is that companies must file details of their algorithms with the cyberspace regulator.
Such restrictions could theoretically apply to any kind of ChatGPT-style technology.
– CNBC’s Arjun Kharpal contributed to this report