EU lawmakers propose regulation of generative AI, among other key changes to upcoming AI law



On Thursday, May 11, the European Parliament's Internal Market and Civil Liberties committees voted in favor of broad amendments to the EU's proposed Artificial Intelligence Act (the "AI Act"). Parliament's proposal further develops the risk-based framework that has dominated the political debate in this area for several years, while at the same time regulating generative AI and other forms of general-purpose AI. It also aims to address new and important concerns related to so-called "foundation models."

The proposed amendments also expand the scope of obligations to so-called "deployers" of AI systems, and impose general requirements that apply to any AI system, regardless of whether it is considered "high risk." EU lawmakers' comprehensive and courageous amendments deserve respect, but some may simply not go in the right direction. Supporting the competitiveness of EU research institutions and businesses requires careful consideration; the EU cannot afford to fall further behind in the fast-paced global race of AI innovation.

Next Steps

The revised draft of the AI Act proposed by the relevant committees will be put to a vote at the plenary session of the European Parliament scheduled for June 2023. Given the nature and scope of the proposed changes, there will likely be intensive trilogue negotiations between the European Council, the European Commission and Parliament to reach agreement on a final version. A successful resolution is likely by the end of 2023, which would result in the world's first comprehensive AI-specific regulation coming into force in early 2024.

Key Amendments

Parliament's proposal has been particularly controversial and introduces a number of amendments that will likely dominate future debate: (1) new rules for foundation models and generative AI; (2) new general principles applicable to all AI systems; (3) expanded obligations applicable to deployers of AI systems; (4) changes to the classification of high-risk AI systems; and (5) additions to the list of prohibited AI practices.

1. Foundation Models and Generative AI

The proposed amendments introduce new rules for foundation models: AI models trained on a very wide range of sources and large amounts of data, designed for versatility across different applications. Such foundation models typically serve as the basis for a wide range of downstream tasks and can be made available to specific dependent applications through open-source releases or application programming interfaces (APIs). A particularly visible and practically ubiquitous form of foundation model is so-called "generative AI," where models are designed to generate all kinds of content: text, code, images, animations, videos, music and more.

The proposed provisions would impose certain obligations on providers of such foundation models, requiring them to:

  • Ensure robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law.
  • Assess and mitigate risks associated with the model.
  • Introduce practical measures in model design and development to ensure that specific criteria are met.
  • Register the model in a newly introduced EU database.

The proposed provisions impose additional, broader transparency requirements on "generative" foundation models, requiring providers to:

  • Disclose that the content was generated by AI.
  • Design the model so that it does not generate illegal content.
  • Publish a summary of copyrighted data used for training; and
  • Support innovation and protect the rights of citizens.

These obligations and requirements for foundation models in general, and generative AI in particular, are waived for research activities and for AI components provided under open-source licenses.

2. Problems with the Proposed Rules for Foundation Models and Generative AI

Regulating foundation models in general, and generative AI in particular, is very necessary and makes a lot of sense. However, such regulation is by no means trivial, and here are some challenges that EU legislators may face in future discussions:

  • Does it really make sense to demand a "guarantee of robust protection" of "fundamental rights, health, safety, the environment, democracy and the rule of law" from general-purpose AI that can serve any form of application? All of these aspects are clearly important, but their specific content is very broad and difficult to define. It might make more sense to stick with the previous approach, which addresses all of these questions at the level of the application rather than the model on which it is built. A more nuanced approach may help avoid stifling the technology's development.
  • The required risk assessment and mitigation strategies for foundation models may not be feasible in practice. Again, the question is whether a model that serves as a platform for downstream applications is the right addressee of such duties. This matters because many risks only appear and emerge once an AI system is put to a concrete use, which usually happens at the application level.
  • The burden the proposed provisions place on foundation models could disproportionately disadvantage small and medium-sized AI businesses, since only large companies may have the resources to comply with such broad mandates.
  • Requiring providers of generative AI to publish a summary of the copyrighted content used for training may be entirely unfeasible. In addition to the sheer scale of the task itself, the exemptions provided by EU copyright law for mining copyrighted content may get in the way.
  • The requirement to disclose that certain content was generated by AI may also not work. For one thing, users may simply not comply, and a predictable level of non-compliance renders any law useless. Aiming for maximum transparency is a good goal, but more care is needed to make the mandate workable.
  • The requirement to design models in a way that prevents illegal content is well-intentioned. But legislators should consider whether it makes more sense to align this obligation with the similar "content moderation" obligations that already exist under the EU Digital Services Act. The phenomenon is very similar, if not the same: a platform provided to others can be abused by third-party applications. Given the overlap between the two regimes, and the political and social hazards involved, a coherent approach to such forms of misuse may make sense, and may indeed be necessary.

3. General Principles for AI

Previous versions of the AI Act by the European Commission and the European Council focused primarily on introducing obligations for "high-risk" AI use cases. However, Parliament's amendments propose to significantly expand the scope of the regulation by introducing a set of general principles for the development and use of AI. These principles are intended to apply to all AI systems, regardless of the risks they pose. Organizations are expected to use their best efforts to develop and use AI systems in accordance with the following requirements:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Social and environmental well-being

4. Additional Obligations for Deployers

While providers (i.e., developers) of high-risk AI systems bear the key obligations under the AI Act, Parliament's amendments would also expand the scope of requirements applicable to the organizations that implement these systems. These organizations, called "users" in previous versions of the AI Act, are now referred to as "deployers."

These additional requirements include, for example:

  • Conducting a detailed impact assessment that considers the risks the AI system poses to the fundamental rights of individuals and the associated mitigation measures to be implemented.
  • Introducing an AI governance system, including complaint-handling and remediation procedures.
  • Implementing human oversight of the relevant AI systems.
  • Providing algorithmic transparency information to end users.

5. High Risk AI Systems

Comprehensive regulation of the specific risks stemming from so-called "high-risk AI systems" is the main focus and purpose of the AI Act. Whether a particular AI system is considered "high risk" depends on its specific scope of use, with the relevant use cases explicitly listed in the AI Act. AI systems used in areas such as medical devices, automobiles, educational assessment, job recruitment, credit assessment, critical infrastructure and health insurance had already been identified as "high risk" in previous drafts.

New to the list of "high-risk AI systems" are applications aimed at "influencing voters in political campaigns." This addition is clearly necessary and makes sense from any point of view. AI (including all sorts of data-analysis techniques) will obviously be used in the context of elections, and these uses deserve particularly close regulatory scrutiny. Few AI applications currently seem in greater need of such regulation.

6. Prohibited AI Practices

Article 5 of the previous draft of the AI Act already defined a number of prohibited AI practices deemed overly intrusive, discriminatory or abusive, even before the latest amendments broadening the bans were filed. These prohibitions include, among others: (1) applying forms of "social scoring"; (2) exploiting individual vulnerabilities; (3) discriminating against or unfairly classifying people on the basis of gender, race, age and the like; and (4) real-time biometric identification in publicly accessible spaces.

These existing bans have been the subject of intense criticism from many human rights groups, who have repeatedly pushed to expand the legal protections against all forms of intrusive AI practices. These efforts have been successful: the latest amendments by the EU parliamentary committees are significantly more restrictive. The list of prohibitions, newly added to or significantly expanded, now also covers the following systems:

  • Retrospective (non-real-time) remote biometric identification – prohibited except for use by law enforcement in the prosecution of serious crimes, and only with judicial authorization.
  • Predictive policing for law enforcement purposes (profiling based on criteria such as location, movement, behavior or past criminal activity).
  • Emotion recognition (i.e., the analysis of human behavior such as facial expressions, body language, gestures and tone of voice to assess emotional state) in law enforcement, border control, workplaces and educational institutions; and
  • Collecting biometric data indiscriminately from social media or surveillance camera footage to create a facial recognition database.

Some of these amendments are expected to meet stiff resistance from some EU Member States, which will exercise their voice through the Council in the upcoming trilogue negotiations. The questions at stake are very delicate and difficult. Is it acceptable to harness the power of AI to scan publicly available information and build new databases to help identify potential criminals? In the context of criminal investigations, are strong restrictions on the retrospective analysis of footage from public spaces wise? These and related issues will certainly continue to spark intense debate in legislatures and society at large.

We will continue to update you on the progress of this and other AI-related legislation.


