Europe takes the lead in global rush to regulate AI

Rapid developments in artificial intelligence have amazed users with their ability to compose music, create images, and write essays, but have also raised concerns about their impact. Even European Union officials, who are working to develop groundbreaking rules to govern emerging technologies, were caught off guard by AI's rapid rise.

Two years ago, the 27-country bloc proposed the Western world's first AI rules, focused on curbing risky but narrow-scope applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI bill considered whether to include them, but were unsure how, or even whether it was needed.

"Then came a kind of boom with ChatGPT," said Dragos Tudorache, a Romanian member of the European Parliament who co-led the bill. "If there were still people who had doubts about whether we really needed anything, I think those doubts quickly disappeared."


Launched late last year, ChatGPT captured the world's attention with its ability to scan vast amounts of online material and generate human-like responses based on what it learns. Amid mounting concerns, European lawmakers have moved quickly in recent weeks to add language on general purpose AI systems as they finalize the legislation.

The EU's AI law could become the de facto global standard for artificial intelligence: given the size of the EU's single market, companies and organizations may decide it is easier to comply than to develop different products for each region.

"Europe is the first region to attempt to regulate AI on a significant scale," said Sarah Chander, senior policy adviser at digital rights group EDRi.

Authorities around the world are scrambling to find ways to take control of rapidly evolving technologies to ensure they improve people’s lives without jeopardizing their rights and safety. Regulators are concerned about new ethical and social risks posed by ChatGPT and other general-purpose AI systems that could upend everyday life, from work and education to copyright and privacy.

The White House recently convened heads of tech companies working on AI, including Microsoft, Google and ChatGPT developer OpenAI, to discuss the risks, warning that the Federal Trade Commission would crack down.

China has released draft regulations mandating security evaluations for products that use generative AI systems like ChatGPT. Britain's competition watchdog has begun a review of the AI market, and Italy temporarily banned ChatGPT, citing privacy violations.

A comprehensive EU regulation covering all providers of AI services and products was approved by a European Parliament committee on Thursday and will next enter negotiations between the 27 member states, the Parliament and the European Commission, the EU's executive branch.

European rules that influence the rest of the world, the so-called Brussels effect, previously emerged after the EU tightened data privacy rules and mandated common phone charging cables, though such efforts have been criticized for stifling innovation.

Attitudes may be different this time. Tech leaders, including Elon Musk and Apple co-founder Steve Wozniak, have called for a six-month pause to consider the risks.

Computer scientist Geoffrey Hinton, known as the "Godfather of AI," and fellow AI pioneer Yoshua Bengio last week expressed concern about uncontrolled AI development.

Tudorache said such warnings showed that the EU's move to start drafting AI rules in 2021 was "the right decision."

Google, which is responding to ChatGPT with its own Bard chatbot and rolling out AI tools, declined to comment. The company has told the EU that "AI is too important not to regulate."

Microsoft, an OpenAI backer, did not respond to a request for comment. But it has welcomed the EU's efforts as an important step toward "making trustworthy AI the norm in Europe and around the world."

OpenAI chief technology officer Mira Murati said in an interview last month that she believes governments should be involved in regulating AI technology.

But when asked whether some of OpenAI's tools should be classified as posing a higher risk under the proposed European rules, she said the question was "very nuanced."

"It depends a bit on where you apply the technology," she said, citing "very high-risk" medical and legal applications, as well as accounting and advertising, as examples.

OpenAI CEO Sam Altman will be on a world tour this month, stopping in Brussels and other European cities to talk to users and developers about the technology.

According to a recent partial draft of the bill obtained by The Associated Press, a recently added provision to the EU's AI law would require "foundational" AI models to disclose copyrighted material used to train their systems.

Foundational models, a category that includes large language models, are a subcategory of general purpose AI that covers systems such as ChatGPT. Their algorithms are trained on vast pools of online information, such as blog posts, digital books, scientific articles and pop songs.

"Major efforts must be made to document the copyrighted material used to train the algorithms," said Tudorache, adding that this paves the way for artists, writers and other content creators to seek redress.

Officials formulating AI regulations must balance the risks the technology poses against the transformative benefits it promises.

EDRi's Chander said big tech companies developing AI systems and European ministries looking to deploy them were "trying to limit the reach of regulators," while civil society groups were pushing for more accountability.

"We want more information about how these systems are being developed, the levels of environmental and economic resources being put into them, and also where and how these systems are being used, so that we can effectively contest them," she said.

Under the EU's risk-based approach, uses of AI that threaten people's safety or rights are subject to strict regulation.

Remote facial recognition is expected to be banned. So are government "social scoring" systems that judge people based on their behavior. Indiscriminate "scraping" of internet photos for biometric matching or facial recognition would also be prohibited.

Predictive policing tools and emotion recognition systems would also be banned, apart from therapeutic and medical applications.

Non-compliance can result in fines of up to 6% of a company’s global annual revenue.

Even if it wins final approval by the end of the year, or early 2024 at the latest, the AI law will not take effect immediately. There will be a grace period for businesses and organizations to work out how to adopt the new rules.

Frederico Oliveira da Silva, senior legal director at the European consumer group BEUC, said the final version of the AI law goes further than the original proposal, so the industry could demand more time to comply.

They might argue, “We need two to three years, not one and a half to two years,” he said.

He pointed out that ChatGPT launched just six months ago and has already brought many problems, as well as benefits, in that time.

"What will happen in the next four years?" Da Silva said of the prospect that the AI law might not be fully enforced for years. "That's our real concern, and that's why we're asking the authorities to take this technology seriously and address the issue." (AP)
