Europe leads the world in how to build guardrails around AI

LONDON (AP) — Authorities around the world are racing to draft rules for artificial intelligence, and on Thursday the European Union's draft legislation reached a crucial stage.

A European Parliament committee voted to strengthen the flagship bill ahead of its passage, part of a yearslong effort by Brussels to put guardrails on artificial intelligence. Rapid advances in chatbots like ChatGPT have made those efforts more urgent, highlighting both the benefits the emerging technology brings and the new dangers it poses.

Here is a closer look at the EU's artificial intelligence law:

How do the rules work?

The AI Act, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The law classifies AI systems according to four levels of risk, from minimal to unacceptable. Higher-risk applications will face tougher requirements, such as greater transparency and the use of accurate data. Think of it as a "risk management system for AI," said Johann Laux, an expert at the Oxford Internet Institute.

What are the risks?

One of the EU’s main goals is to defend against AI threats to health and safety and protect fundamental rights and values.

This means that some AI uses are banned outright, such as "social scoring" systems that judge people based on their behavior. Also prohibited is AI that exploits vulnerable people, including children, or that uses subliminal manipulation likely to cause harm, such as an interactive talking toy that encourages dangerous behavior.

Lawmakers bolstered the proposal by voting to ban predictive policing tools, which crunch data to forecast where crimes will occur and who will commit them. They also approved a widened ban on remote facial recognition, with exceptions for law enforcement purposes such as preventing specific terrorist threats. The technology scans passers-by and uses AI to match their faces against a database.

Brando Benifei, an Italian member of the European Parliament who is co-leading its work on the AI Act, told reporters on Wednesday that the aim is to "avoid a controlled society based on AI." "We think that these technologies could be used instead of the good also for the bad, and we consider the risks to be too high," he said.

AI systems in high-risk categories that affect people's lives, such as employment and education, face stringent requirements, including being transparent with users and putting risk assessment and mitigation measures in place.

The EU's executive branch says most AI systems, such as video games and spam filters, fall into the low- or no-risk category.

What about ChatGPT?

The original 108-page proposal barely mentioned chatbots, merely requiring them to be labeled to indicate that a user was interacting with a machine. Negotiators later added a clause covering general-purpose AI like ChatGPT, imposing some of the same requirements as high-risk systems.

One important addition is a requirement to thoroughly document any copyrighted material used to teach AI systems to generate text, images, video, or music that resembles human work. This would let content creators know whether their blog posts, digital books, scientific articles, or pop songs have been used to train the algorithms that power systems like ChatGPT. They could then decide whether their work has been copied and seek redress.

Why are EU regulations so important?

The European Union is not a big player in cutting-edge AI development; that role falls to the United States and China. But Brussels often plays a trendsetting role with regulations that tend to become de facto global standards.

"Europeans are fairly wealthy and there are a lot of them," Laux said, so companies and organizations often decide that, given the bloc's single market of 450 million consumers, it is easier to comply across the board than to develop different products for different regions.

But it's not just about cracking down. By setting common rules for AI, Brussels is also trying to build user trust and develop the market, Laux said.

"The idea behind it is that if people can trust AI and its applications, they will also use it more," Laux said. "And the more AI is used, the more it will unlock its economic and social potential."

What happens if I break the rules?

Violations can draw fines of up to 30 million euros ($33 million) or 6% of a company's annual global revenue, which for tech companies like Google and Microsoft could amount to billions of dollars.

What's next?

It could be years before the rules are fully in force. European Union lawmakers are scheduled to vote on the bill at a plenary session in mid-June. It will then move into three-way negotiations involving the bloc's 27 member states, the Parliament, and the executive Commission, where it could face further changes as the details are hammered out. Final approval is expected by the end of this year, or early 2024 at the latest, followed by a grace period, often around two years, for companies and organizations to adapt.
