European Parliament tweaks text ahead of main committee vote – EURACTIV.com



EU lawmakers are finalising the wording of the AI Act ahead of a vote in the leading parliamentary committees on Thursday (11 May).

The AI Act is a landmark legislative proposal to regulate artificial intelligence based on its potential to cause harm. The members of the European Parliament (MEPs) leading the file circulated a tweaked version of the compromise amendments on Friday (5 May).

The compromises, seen by EURACTIV, reflect the broader political agreement reached at the end of April, but also include last-minute changes and important details on how the deal was put into practice.

Foundation models

The original proposal for the AI Act did not cover AI systems without a specific purpose. The meteoric success of ChatGPT and other generative AI models upended the debate and prompted lawmakers to consider how best to regulate such systems.

The agreement found was to impose a stricter regime on so-called foundation models, powerful AI systems that can power other AI applications.

With regard to generative AI in particular, MEPs agreed that a summary of the training data covered by copyright law should be provided. The tweaked text specifies that this summary should be “sufficiently detailed”.

Additionally, generative foundation models should ensure transparency about the fact that their content is AI-generated rather than human-generated.

The fine for foundation model providers who violate AI rules is set at €10 million or 2% of annual turnover, whichever is higher.

High-risk systems

The AI Act establishes a strict regime for AI solutions that pose a high risk of causing harm. The original proposal automatically categorised as high-risk any system falling under certain critical areas or use cases listed in Annex III.

However, MEPs added an ‘extra layer’, meaning the classification is no longer automatic: to be considered high-risk, a system must also pose a “significant risk”.

A new paragraph was introduced to better define what is meant by significant risk, stating that it should be “assessed taking into account, on the one hand, the effect of such risk with respect to its level of severity, intensity, probability of occurrence and duration combined and, on the other hand, whether the risk can affect an individual, a plurality of persons or a particular group of persons”.

There were also some last-minute changes to Annex III. MEPs agreed to include the recommender systems of very large online platforms, as defined under the Digital Services Act, as a high-risk category. The latest change limits this high-risk category to social media.

AI systems used to influence the outcome of an election or voting behaviour are considered high-risk. Still, an exception was introduced for AI models whose output is not directly seen by the general public, such as tools used to organise political campaigns.

A new requirement was added mandating that high-risk AI systems comply with accessibility requirements.

In terms of transparency, the text requires that where deployers use a high-risk AI system to make, or assist in making, decisions related to natural persons, the persons affected must be informed that they are subject to the use of such a system.

At the request of the centre-left, the parliamentary text includes an obligation for those deploying high-risk systems in the EU to conduct a fundamental rights impact assessment. This impact assessment includes consultation with competent authorities and relevant stakeholders.

A new addition to the text exempts SMEs from this consultation provision.

Prohibited practices

The AI Act prohibits applications deemed to pose an unacceptable risk. Progressive lawmakers obtained an extended ban on biometric identification systems, covering both real-time and ex-post use, with an exception for the latter in cases of serious crime and subject to prior judicial authorisation.

The biometric ban is hard to swallow for the centre-right European People’s Party, which has a strong law-enforcement-minded faction. The EPP obtained a split vote on the biometric bans, to be voted on separately from the compromise amendments.

Additionally, a carve-out for therapeutic purposes was introduced into the ban on biometric categorisation.

Governance and enforcement

MEPs sketched out the architecture of the AI Office, a new EU body meant to support the harmonised application of the AI rulebook and cross-border investigations.

Language was added referring to a potential future strengthening of the office to better support cross-border enforcement. The reference is to an upgrade to an agency, a solution the current EU budget does not allow.

In a last-minute tweak, EU lawmakers empowered national authorities to request access to both the trained and pre-trained models of AI systems, including foundation models. Access may take place on-site or, in exceptional circumstances, remotely.

In addition, the text introduces professional secrecy obligations for national authorities, with wording taken from the EU’s General Data Protection Regulation.

Review

The list of factors the European Commission should consider when reviewing the AI law was extended to include sustainability requirements, the legal regime for foundation models, and unfair contractual terms unilaterally imposed by providers of general-purpose AI on SMEs and start-ups.

[Edited by Nathalie Weatherald]
