Europe to lead the way in a global rush to regulate AI



LONDON (AP) — Breathtaking developments in artificial intelligence have captivated users with tools that compose music, generate images and write essays, and have also raised concerns about their impact. Even EU officials working on groundbreaking rules to govern the emerging technology have been caught off guard by AI’s rapid rise.

Two years ago, the 27-nation bloc proposed the Western world’s first AI rules, which focused on curbing risky but narrowly targeted applications. General-purpose AI systems like chatbots were barely mentioned. Lawmakers working on the legislation debated whether to include them, but weren’t sure how, or even whether it was needed.

“Then a boom like ChatGPT exploded,” said Dragos Tudorache, a Romanian member of the European Parliament who co-led the bill. “If there were still people wondering if they needed anything, I think that doubt disappeared quickly.”

Released last year, ChatGPT captured the world’s attention with its ability to scan vast amounts of online material and generate human-like responses based on what it learns. As concerns emerged, European lawmakers moved swiftly in recent weeks to add language covering general-purpose AI systems as they put the finishing touches on the legislation.

The EU’s AI law could become the de facto global standard for artificial intelligence: given the size of the bloc’s single market, businesses and organizations may find it easier to comply than to develop different products for different regions.

“Europe is the first regional bloc to try to heavily regulate AI. This is a big challenge given the wide range of systems that the broad definition of ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi.

Authorities around the world are scrambling to find ways to rein in the rapidly evolving technology so that it can improve people’s lives without jeopardizing their rights or safety. Regulators worry about new ethical and social risks posed by ChatGPT and other general-purpose AI systems, which could change daily life in areas from work and education to copyright and privacy.

The White House recently brought in the heads of tech companies working on AI, including Microsoft, Google and OpenAI, the maker of ChatGPT, to discuss the risks, while the Federal Trade Commission has warned it won’t hesitate to crack down.

China has issued draft regulations mandating security assessments for any product that uses generative AI systems like ChatGPT. Britain’s competition watchdog has launched a review of the AI market, and Italy temporarily banned ChatGPT over privacy violations.

A comprehensive EU regulation targeting providers of AI services and products is expected to be approved by a European Parliament committee on Thursday, after which it heads to negotiations among the 27 member states, the Parliament and the EU’s executive Commission.

European rules have shaped practices in the rest of the world before, a phenomenon known as the Brussels effect, as when the EU tightened data privacy rules and mandated common phone-charging cables, though such efforts have been criticized as stifling innovation.

Attitudes may be different this time. Tech leaders such as Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause to consider the risks.

Computer scientist Geoffrey Hinton, known as the “godfather of AI,” and fellow AI pioneer Yoshua Bengio voiced their concerns last week about unchecked AI development.

Tudorache said such warnings show that the EU’s move to start drafting AI rules in 2021 was “the right decision.”

Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment. The company has told the EU that “AI is too important not to regulate.”

Microsoft, a backer of OpenAI, did not respond to a request for comment, but it has welcomed the EU’s effort as an important step “toward making trustworthy AI the norm in Europe and around the world.”

Mira Murati, chief technology officer at OpenAI, said in an interview last month that she believed governments needed to be involved in regulating AI technology.

But asked whether some of OpenAI’s tools should be classified as posing a higher risk under the proposed European rules, she said the question was “very sensitive.”

“It depends on where you apply the technology,” she said, citing “very high-risk medical use cases or legal use cases,” as well as accounting or advertising applications.

OpenAI CEO Sam Altman is planning a world tour this month, with stops in Brussels and other European cities, to talk to users and developers about the technology.

A recently added provision to the EU’s AI law would require foundation AI models to disclose copyrighted material used to train their systems, according to a recent partial draft of the legislation obtained by The Associated Press.

Foundation models, also known as large language models, are a subcategory of general-purpose AI that includes systems such as ChatGPT. Their algorithms are trained on vast pools of online information, such as blog posts, e-books, scientific articles and pop songs.

“We must go to great lengths to document the copyrighted material we use in training our algorithms,” Tudorache said, a requirement that would pave the way for artists, writers and other content creators to seek redress.

Officials drawing up the AI regulations must balance the risks the technology poses with the transformative benefits it promises.

EDRi’s Chander said that while major technology companies developing AI systems and the European national ministries looking to deploy them are “trying to limit the reach of regulators,” civil society groups are pushing for more accountability.

“We need more information on how these systems are being developed, including the level of environmental and economic resources put into them, but also on how and where these systems are being used, so that we can challenge them effectively,” she said.

Under the EU’s risk-based approach, uses of AI that threaten people’s safety or rights face strict controls.

Remote facial recognition is expected to be banned. So are government “social scoring” systems that judge people based on their behavior. Indiscriminate “scraping” of photos from the internet for biometric matching or facial recognition is also prohibited.

Predictive policing and emotion recognition technology, aside from therapeutic or medical uses, are also out.

Violations can draw fines of up to 6% of a company’s global annual revenue.

Even after receiving final approval, expected by the end of the year or early 2024 at the latest, the AI Act won’t take effect immediately. There will be a grace period for businesses and organizations to figure out how to adopt the new rules.

Frederico Oliveira da Silva, a senior legal officer at the European consumer group BEUC, said the industry could push for more time to comply by arguing that the final version of the AI Act goes further than the original proposal.

They could argue that “we need two to three years instead of one and a half to two years,” he said.

He noted that ChatGPT launched only six months ago and has already surfaced many problems and benefits in that time.

“What will happen in the next four years,” da Silva asked, if the AI Act is not fully enforced for years to come. “It’s really a concern for us, and we’re asking the authorities to stay on top of it and really focus on this technology.”

___

AP Technology Writer Matt O’Brien contributed from Providence, Rhode Island.
