The European Parliament’s main parliamentary committees voted on Thursday (May 11) to give the AI law the go-ahead, paving the way for its plenary adoption in mid-June.
The AI Act is the EU’s flagship law for regulating artificial intelligence based on its potential to cause harm. On Thursday, the parliamentary civil liberties committee and the internal market committee jointly adopted the text with a large majority.
The next step is plenary adoption, with a provisional date of 14 June. After the MEPs formally state their positions, the proposal will enter the final stages of the legislative process, initiating so-called tripartite negotiations with the EU Council and the European Commission.
Brando Benifei, one of the file’s co-rapporteurs, told colleagues before the vote: “We are building a bill that is truly groundbreaking for the digital landscape, not just for Europe, but for the whole world.”
Definition of AI
The definition of artificial intelligence is a crucial aspect of the law, as it determines its scope. MEPs agreed to align the definition with that of the Organisation for Economic Co-operation and Development (OECD), a club of 38 rich countries.
“‘Artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
Notably, the OECD is already considering fine-tuning its definition of AI. MEPs have therefore adjusted the wording in anticipation of the organisation’s future language.
Prohibited practices
The AI Act prohibits certain applications deemed to pose an unacceptable risk, such as manipulative techniques and social scoring. At the insistence of left-of-centre lawmakers, the list of prohibited practices was significantly expanded.

The ban now extends to AI models for biometric categorisation, predictive policing, and the scraping of facial images to build databases. Emotion recognition software is prohibited in law enforcement, border control, workplaces, and education.
More controversial was biometric identification, which was initially allowed in exceptional circumstances such as kidnappings and terrorist attacks. A parliamentary majority backed an outright ban, even though the conservative European People’s Party opposed it until the last minute.
General-purpose AI
The original version of the AI Act did not cover AI systems without a specific purpose. The rapid success of ChatGPT and other large language models left EU lawmakers wondering how best to regulate this type of AI. As a result, a tiered approach was adopted.

The AI rulebook does not cover general-purpose AI (GPAI) systems by default. Most of the obligations fall on the economic operators that integrate these systems into applications deemed high risk.
However, GPAI providers must support downstream operators’ compliance by providing all relevant information and documentation on AI models.
Stricter requirements are proposed for foundation models, powerful general-purpose AI systems like Stable Diffusion that can power other AI applications. Obligations include risk management, data governance, and robustness safeguards for the model, vetted by independent experts.

The top tier covers generative AI models like ChatGPT, which must disclose whenever content is AI-generated and provide a detailed summary of the training data covered by copyright law.
High-risk classification
The regulation introduces a stricter regime for high-risk AI applications. Initially, high-risk status was determined automatically from a list of critical areas and use cases in Annex III.

However, MEPs removed this automaticity, adding an extra layer: to be classified as high risk, an AI system must also pose a significant risk to people’s health, safety or fundamental rights.

If an AI system falls under Annex III but the provider considers that it does not pose a significant risk, it must notify the relevant authority, which has three months to object. In the meantime, the provider may launch the AI solution, but faces penalties for misclassification.
Annex III has also been significantly revised to provide more precise language on access to critical infrastructure, education, employment and essential services. The areas of law enforcement, migration control and the administration of justice were expanded.

Recommender systems of social media platforms designated as very large online platforms under the Digital Services Act were added to the list.
High-risk obligations
The European Parliament document made the obligations for high-risk AI providers more prescriptive, especially in risk management, data governance, technical documentation and record keeping.
An entirely new requirement was introduced for users of high-risk AI solutions to carry out a fundamental rights impact assessment, considering aspects such as potential negative effects on marginalised groups and the environment.
Governance and enforcement
There was a consensus among EU parliamentarians on introducing an element of centralisation into the enforcement structure, especially for cross-border cases. Co-rapporteur Dragoș Tudorache proposed the creation of a new body, the AI Office, stopping short of a fully-fledged EU agency.

During the negotiations, the AI Office’s mandate was significantly reduced, as there was no room for manoeuvre in the EU budget. As a result, the AI Office was left with a supporting role, such as providing guidance and coordinating joint investigations.
By contrast, the European Commission was tasked with settling disputes between national authorities over AI systems posing a significant risk.
[Edited by Alice Taylor]