How Europe is building artificial intelligence guardrails

A European Parliament committee is set to vote on the proposed rules as part of a multi-year effort to develop guardrails for artificial intelligence. ChatGPT’s rapid progress has made these efforts all the more urgent, as it highlights the benefits that emerging technology brings, as well as the new dangers it poses.

Here is a look at the EU's artificial intelligence law:

How do the rules work?

The AI law, first proposed in 2021, would govern any product or service that uses an artificial intelligence system. The law classifies AI systems according to four levels of risk, ranging from minimal to unacceptable. Higher-risk applications will face tougher requirements, such as increased transparency and the use of accurate data. Think of it as a "risk management system for AI," says Johann Laux, an expert at the Oxford Internet Institute.

What are the risks?

One of the EU’s main goals is to defend against AI threats to health and safety and protect fundamental rights and values.

This means that some AI uses are banned outright, such as "social scoring" systems that judge people based on their behavior, and interactive talking toys that encourage risky behavior.

Predictive policing tools, which crunch data to forecast where crimes will occur and who will commit them, will also be banned. So will remote facial recognition, with some narrow exceptions such as preventing specific terrorist threats. The technology scans passers-by and uses AI to match their faces against a database. Thursday's vote will decide how broad the ban should be.

Italian MEP Brando Benifei, who is leading the European Parliament's work on the AI law, told reporters on Wednesday that the aim is to "avoid an AI-based controlled society." "I think these technologies can be used for good as well as bad, and I think the risks are too high," he said.

AI systems in high-risk categories that affect people's lives, such as employment and education, will face stringent requirements, including being transparent with users and putting risk assessment and mitigation measures in place.

The EU's executive branch says most AI systems, such as video games and spam filters, fall into the low- or no-risk category.
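The tiered approach described above can be summarized in a small sketch. This is purely illustrative and not drawn from the law's text: the tier names and the mapping of examples to obligations follow the article's descriptions, and the `RISK_TIERS` structure and `obligation_for` helper are hypothetical.

```python
# Illustrative mapping of the AI Act's four risk tiers, as described in
# the article, to example applications and their headline obligations.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "predictive policing",
                     "remote facial recognition (narrow exceptions)"],
        "obligation": "banned outright",
    },
    "high": {
        "examples": ["employment tools", "education systems"],
        "obligation": "transparency, risk assessment and mitigation",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "disclose that users are interacting with a machine",
    },
    "minimal": {
        "examples": ["video games", "spam filters"],
        "obligation": "little or no extra requirement",
    },
}

def obligation_for(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("unacceptable"))  # banned outright
```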

What about ChatGPT?

The original 108-page proposal barely mentioned chatbots, merely requiring them to be labeled to indicate that a user was interacting with a machine. Negotiators later added a clause covering general-purpose AI like ChatGPT, imposing some of the same requirements as high-risk systems.

One important addition is a requirement to thoroughly document any copyrighted material used to teach AI systems to generate text, images, video, or music that resembles human work. This would let content creators know whether their blog posts, digital books, scientific articles, or pop songs have been used to train the algorithms that power systems like ChatGPT. They could then decide whether their work has been copied and seek redress.

Why are EU regulations so important?

The European Union is not a major player in cutting-edge AI development; that role falls to the United States and China. But Brussels often sets the trend with regulations that tend to become de facto global standards.

Because "Europeans are fairly wealthy and numerous," companies and organizations often find it easier to comply with EU rules everywhere than to develop different products for different regions, given the bloc's single market of 450 million consumers, Laux said.

But it's not just about policing. By setting common rules for AI, Brussels is also trying to build trust among users and develop the market, Laux said.

“The idea behind it is that if people can trust AI and applications, they will use AI more,” said Laux. “And the more we use AI, the more it will unlock its economic and social potential.”

What happens if the rules are broken?

Violations could draw fines of up to €30 million ($33 million) or 6% of a company's annual global revenue, which for technology companies such as Google and Microsoft could run into the billions of dollars.
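The fine cap above is simple arithmetic and can be sketched directly. One assumption is labeled in the code: which of the two bounds applies in a given case is for regulators to decide, and this sketch takes the larger of the two, as the figures are commonly reported. The `max_fine_eur` function and the revenue figure are hypothetical.

```python
# Sketch of the fine cap described above: up to €30 million or 6% of
# annual global revenue. Assumption: the larger of the two bounds
# applies (the actual determination is up to regulators).
def max_fine_eur(annual_global_revenue_eur: float) -> float:
    FLAT_CAP = 30_000_000   # €30 million
    REVENUE_SHARE = 0.06    # 6% of annual global revenue
    return max(FLAT_CAP, REVENUE_SHARE * annual_global_revenue_eur)

# For a hypothetical company with €200 billion in annual revenue,
# the 6% bound dominates: €12 billion.
print(max_fine_eur(200e9))  # 12000000000.0
```

This is how a percentage-of-revenue cap dwarfs the flat cap for large firms: 6% of €200 billion is €12 billion, four hundred times the €30 million floor.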

What's next?

It could be years before the rules take full effect. The bill faces a vote by a joint committee of the European Parliament on Thursday. It would then move to three-way negotiations involving the bloc's 27 member states, the Parliament, and the European Commission, where it faces further wrangling over the details. Final approval is expected by the end of this year, or early 2024 at the latest, followed by a grace period (often around two years) for companies and organizations to adapt.



