LONDON (AP) — The breathtaking development of artificial intelligence has drawn users into composing music, creating images and writing essays, while also raising concerns about its impact. Even EU officials working on groundbreaking rules to govern emerging technologies have been caught off guard by AI's rapid rise.
The 27-nation bloc proposed the Western world's first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General-purpose AI systems like chatbots were barely mentioned. Lawmakers working on the legislation debated whether to include them, but weren't sure how, or even whether it was necessary.
“Then ChatGPT just exploded,” said Dragos Tudorache, a Romanian member of the European Parliament who co-led the bill. “If there were still people wondering whether we needed anything at all, I think that doubt disappeared quickly.”
Launched last year, ChatGPT caught the world’s attention with its ability to generate human-like responses based on what it has learned from scanning vast amounts of online material. As concerns have surfaced, European lawmakers have moved swiftly in recent weeks to add language covering general AI systems as they put the finishing touches on the legislation.
The EU’s AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc’s single market makes it easier to comply than to develop different products for different regions.
“Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi.
Authorities around the world are scrambling to find ways to control rapidly evolving technology so that it can improve people’s lives without jeopardizing their rights and safety. Regulators are concerned about new ethical and social risks posed by ChatGPT and other general-purpose AI systems that could upend everyday life, from work and education to copyright and privacy.
The White House recently brought in the heads of tech companies working on AI, including Microsoft, Google and ChatGPT creator OpenAI, to discuss the risks, while the Federal Trade Commission has warned that it wouldn’t hesitate to crack down.
The EU’s sweeping regulation, covering any provider of AI services or products, is expected to be approved by a European Parliament committee on Thursday, then head into negotiations among the 27 member countries, the Parliament and the EU’s executive Commission.
Computer scientist Geoffrey Hinton, known as the “godfather of AI,” and fellow AI pioneer Yoshua Bengio voiced their concerns last week about the technology’s unchecked development.
Tudorache said such warnings show that the EU’s move to start drafting AI rules in 2021 was “the right decision”.
Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment. The company has told the EU that “AI is too important not to regulate.”
Microsoft, a backer of OpenAI, did not respond to a request for comment. But the company has welcomed the EU’s effort as an important step “toward making trustworthy AI the norm in Europe and the world.”
OpenAI Chief Technology Officer Mira Murati said in an interview last month that she believes governments should get involved in regulating AI technology.
Asked whether some of OpenAI’s tools should be classified as posing higher risk under the proposed European rules, she said it “depends on where you apply the technology,” citing a “very high-risk medical use case or legal use case” versus an accounting or advertising application.
OpenAI CEO Sam Altman is planning a world tour this month, stopping in Brussels and other European cities to talk to users and developers about the technology.
A recently added provision to the EU’s AI Act would require “foundation” AI models to disclose copyrighted material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.
Foundation models, also known as large language models, are a subcategory of general-purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of online information, such as blog posts, e-books, scientific articles and pop songs.
Requiring companies to extensively document the copyrighted material used to train their algorithms paves the way for artists, writers and other content creators to seek redress, Tudorache said.
EDRi’s Chander said that while big tech companies developing AI systems and European national ministries looking to deploy them are “trying to limit the reach of regulators,” civil society groups are pushing for more accountability.
“We need more information on how these systems are being developed — the level of environmental and economic resources put into them — but we also need to be able to effectively challenge how and where these systems are used,” she said.
Under the EU’s risk-based approach, uses of AI that threaten people’s safety or rights would face strict controls.
Remote facial recognition would be banned. So would government “social scoring” systems that judge people based on their behavior, along with the indiscriminate “scraping” of photos from the internet for biometric matching and facial recognition.
Predictive policing and emotion recognition technologies, aside from therapeutic or medical uses, would also be prohibited.
Non-compliance can result in fines of up to 6% of a company’s global annual revenue.
Even after receiving final approval, expected by the end of the year or early 2024 at the latest, the AI Act will not take effect immediately. There will be a grace period for companies and organizations to figure out how to adopt the new rules.
Frederico Oliveira da Silva, a senior legal officer at the European consumer group BEUC, said the industry could ask for more time by arguing that the final version of the AI Act goes further than the original proposal.
They could contend that “instead of one and a half to two years, we need two to three,” he said.
He pointed out that ChatGPT launched only six months ago, and in that time has already raised a host of problems and benefits.
If the AI Act doesn’t fully take effect for years, “what will happen in these four years?” da Silva said. “That’s really our concern, and that’s why we’re asking the authorities to be on top of it, to really focus on this technology.”
AP Technology Writer Matt O’Brien in Providence, Rhode Island, contributed to this report.
Copyright 2023 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
