EU parliament strikes a fine line against prohibited practices – EURACTIV.com



Members of the European Parliament closed several key parts of the AI regulation at a political meeting on Thursday (13 April), but the question of banned AI uses could still split the hemicycle.

The AI Act is a landmark piece of legislation to regulate artificial intelligence based on its potential to cause harm. MEPs are closing in on a political agreement on the file ahead of a key committee vote scheduled for 26 April, but adoption in plenary could prove difficult.

The most politically sensitive topic discussed at Thursday's political meeting with all groups was prohibited practices, AI applications deemed to pose an unacceptable risk. The governance part was largely settled.

Prohibited practices

The German liberals proposed introducing a ban on "the use of AI systems for the general monitoring, detection and interpretation of private content in interpersonal communication services, including any measure that would undermine end-to-end encryption".

The proposed text, seen by EURACTIV, is intended to exclude AI-powered tools for detecting suspected illegal content, as required under the EU proposal to combat child sexual abuse material.

However, the conservative European People's Party (EPP), a group traditionally supportive of law enforcement, opposed the provision.

In exchange for dropping this provision, more progressive lawmakers obtained a ban on emotion recognition technology in law enforcement, border management, the workplace and education, except for AI used for medical or research purposes with the subject's consent.

Also part of the deal is a ban on real-time remote biometric identification, while allowing its use ex post. However, while the EPP's shadow rapporteur agreed to the arrangement, it is unclear whether the group's leadership will call for a key vote during the plenary.

A key vote means an alternative amendment would be tabled; if it does not pass, the group would have to vote against the text as a whole.

According to parliamentary sources, political discussions among group leaders seek to avoid a scenario in which the largest political group does not support the text, which would weaken Parliament's negotiating position.

High-risk categories

The AI Act introduces a strict regime for high-risk systems, with the list of covered areas and use cases set out in Annex III. Although there was no time to discuss Annex III itself, this part was largely agreed.

In the original proposal, the high-risk classification was automatic, but MEPs introduced an additional layer, meaning an AI model covered by Annex III would only be considered high risk if it posed a significant risk.

If AI providers consider that their system does not pose a significant risk, they must submit a reasoned notification to the competent national authority or, if they operate in more than one European country, to the EU's AI Office.

National authorities would have three months to challenge the classification, and AI providers would have a right of appeal. However, during this period the AI provider could already place its AI solutions on the EU market once the notification has been sent.

A key question is whether the authorities can, or must, reply to each notification. The Greens are concerned that too many notifications would create a backlog, preventing authorities from filtering out dangerous systems.

Under the compromise, the Commission, in consultation with the AI Office and relevant stakeholders, would be tasked with developing guidelines specifying the criteria companies should use for such self-assessments, six months before the regulation enters into force.

Security components of transport or digital networks covered by sectoral legislation were excluded from Annex III.

In education, AI applications for assessing the appropriate level of education and for detecting cheating during exams were added to the high-risk list.

With respect to AI in the workplace, Annex III covers AI systems intended for recruitment, in particular for placing targeted job advertisements, filtering applications and evaluating candidates.

Governance and enforcement

Co-rapporteur Dragoș Tudorache initially pushed for centralised enforcement via the AI Office.

However, due to budget constraints, the AI Office's role was significantly scaled back to a supporting one, although it retained its own secretariat and executive director.

Investigative powers remain largely in the hands of national authorities. The European Commission would only need to step in for the most serious cases, namely when national authorities ban a system that complies with the AI regulation but still poses serious risks.

If more than one member state is affected, the lead authority would be the one where the infringement occurred. However, if the violation is widespread or affects at least 45 million people in two countries, a joint investigation is envisaged.

Widespread infringements concern at least three EU member states. They gain a European dimension if they affect the collective interests of EU member states or at least two-thirds of the EU population.

What remains

In addition to Annex III, lawmakers still have to finalise the provisions on general-purpose AI, the AI value chain, stand-alone articles, user obligations, and the regulation's preamble. The committee vote on 26 April might therefore still be postponed.

[Edited by Alice Taylor]
