At the end of March, Garante, the Italian data protection authority, announced a ban on ChatGPT, a chatbot that uses artificial intelligence (AI) to generate text that reads as if it were written by humans. The watchdog was less concerned with the use of AI itself (the simulation of human intelligence by computer systems) than with violations of data protection laws.
Garante then told OpenAI, the Microsoft Corp-backed company behind ChatGPT, that it must be more transparent with users about how their data is handled. The authority also said that if the US company uses personal data to further develop, i.e. train, its software, it must obtain users' permission, and that it must restrict access by minors. In a press release, the Italian authority said the ban would be lifted if OpenAI meets these conditions by April 30.
An OpenAI spokesperson told Reuters that the company was "pleased" that Garante was "reconsidering" the original ban, and that it looks forward to working with the regulator to make ChatGPT available to Italian customers again soon.
EU-wide regulation on AI
Spain and France have expressed similar concerns about ChatGPT. There are currently no EU-wide regulations on the use of AI in products such as self-driving cars, medical technology and surveillance systems. The European Parliament is still debating legislation proposed by the European Commission two years ago. If approved, it will probably not come into force before early 2025, since the EU member states themselves will also have to agree.
But German MEP Axel Voss, one of the lead authors of the EU's artificial intelligence law, said that AI has advanced considerably since the proposal was drafted two years ago and is likely to advance even further in the next two years, so parts of the law may no longer be relevant by the time it is actually enforced.
It is not yet clear whether ChatGPT and similar products will be subject to the EU regulation. The draft defines risk levels for AI ranging from "unacceptable" to "minimal or no risk". As the law currently stands, only programs assigned a "high risk" or "limited risk" score are subject to special rules on algorithm documentation, transparency, and disclosure of data usage. Applications that record and evaluate people's social behavior in order to predict certain conduct would be banned outright, as would government social scoring and certain facial recognition technologies.
Legislators are still debating how much AI will be allowed to record or simulate emotions, and how to assign risk categories.
"For competitive reasons, and because we are already lagging behind, we actually need to be more optimistic and engage with AI more intensively," Voss said. "But what is happening instead is that most people, driven by fear and concern, are trying to exclude everything." He added that it would make more sense to amend existing data protection laws.
Balancing consumer protection and economics
The European Commission and Parliament are trying to strike a balance between consumer protection and regulation on the one hand and the free development of the economy and research on the other. After all, as EU Internal Market Commissioner Thierry Breton has pointed out, AI offers "tremendous potential" in digital societies and economies. When the AI law was presented two years ago, Breton said the EU did not want to drive AI developers away, but to encourage them and persuade them to settle in Europe. He added that the EU should not have to rely on foreign providers, and that AI data should be stored and processed within the EU.
Mark Brakel of the US-based nonprofit Future of Life Institute told DW that companies must also be held accountable by regulators. Assigning risk levels to AI applications is not enough, he said; developers themselves need to monitor the risks of individual applications, and "companies should be obliged to manage this risk and publish the results." Some companies cannot predict today what their AI products will be able to do tomorrow, and the results may surprise them, he added.
"If it becomes too complicated here, companies will go elsewhere to develop their algorithms and systems," warned MEP Voss. "Then they will come back and use us only as a consumer market, so to speak."
ChatGPT, which has caused such a stir in Europe, was developed in the United States and can be used worldwide. OpenAI may soon face stiff competition from other US companies, including Google and Elon Musk's Twitter. Chinese tech giants are also joining the race, with Baidu having already created a chatbot called Ernie.
So far, no comparable European chatbot appears to have emerged.
This article was translated from German.
