As concerns about artificial intelligence (AI) continue to grow around the world, the European Union (EU) is providing the international community with a regulatory roadmap. On May 11, 2023, the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs and Committee on the Internal Market and Consumer Protection approved the Artificial Intelligence Act. As drafted, the law is a world first in its approach to managing the legal risks of AI and is certain to be studied by the United States and other nations as AI evolves rapidly. The legislation also sets the course for continued US-EU partnership, as transatlantic cooperation, regulatory oversight, appropriate industry standards, and the promotion of economic partnerships remain priorities. Although the AI Act has been approved by members of the European Parliament, additional steps remain before the bill becomes law.
Basic principles of EU AI law
With AI’s ever-expanding capabilities in composing music, writing literature, and providing medical services, the proposed law sets out key principles to ensure human oversight, safety, transparency, traceability, non-discrimination, and environmental responsibility. It aims to establish a uniform definition of AI that accommodates both existing and future AI systems and remains technology-neutral. Notably, the law takes a risk-based approach to AI regulation, with the obligations imposed on an AI system correlated to the level of risk it can pose. The law includes provisions exempting research activities and AI components supplied under open-source licenses. The bill also endorses regulatory sandboxes: controlled environments established by public authorities in which AI can be tested prior to deployment. This approach aims to balance the protection of fundamental rights with the need for legal certainty for business and the stimulation of innovation in Europe.
Current and trending US approaches
In contrast, the U.S. Congress continues to watch AI closely, with greater emphasis on funding research to understand its capabilities and potential. These efforts are driven in part by the desire to grasp the breadth of AI and potentially allay concerns in the regulatory arena. Ultimately, advances in AI technology could themselves serve as tools to mitigate some of the risks identified in the Act’s core principles. American federalism adds to the already thorny dilemma of regulatory enforcement, producing a patchwork of inconsistent state laws in a country that hopes to lead the next great technological revolution. Indeed, several states have already proposed laws regulating the development and use of AI. For example, California’s proposed legislation (AB 331) would regulate the use of automated decision-making tools (including AI) and require developers and users of these tools to submit annual impact assessments.
Key Principles of EU AI Law
Four risk levels
AI applications are classified into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk. Applications that pose an unacceptable risk are prohibited outright and cannot be deployed within the EU. This category includes AI systems that employ subliminal techniques or manipulative tactics to distort behavior; exploit the vulnerabilities of individuals or groups; categorize people biometrically based on sensitive attributes; perform social scoring or trustworthiness assessments; predict criminal or administrative offenses; create or expand facial recognition databases through untargeted scraping; or infer emotions in law enforcement, border control, the workplace, and education. At the other end of the scale, minimal-risk applications include systems deployed for product or inventory control and AI-enabled platforms such as video games. Limited-risk systems, such as chatbots, are subject to disclosure obligations so that users know they are interacting with a machine rather than a human.
High-risk uses
The AI Act identifies the following uses as high risk:
- Biometric identification and categorization of natural persons: AI systems intended to be used for ‘real-time’ and ‘post’ remote biometric identification of natural persons.
- Management and operation of critical infrastructure: AI systems intended for use as safety components in the management and operation of road traffic, water, gas, heating and electricity supplies.
- Education and vocational training: AI systems intended to be used to determine access or assign natural persons to educational and vocational training institutions, as well as AI systems intended to be used to evaluate students in such institutions and to assess participants in tests commonly required for admission to educational institutions.
- Employment, worker management, and access to self-employment: AI systems intended to be used for recruiting or selecting natural persons, in particular for advertising vacancies, screening or filtering applications, or evaluating candidates in interviews or tests; and AI systems intended to be used to make decisions on promotion and termination of work-related contractual relationships, to allocate tasks, and to monitor and evaluate the performance and behavior of persons in such relationships.
- Access to and enjoyment of essential private and public services and benefits: AI systems intended to be used by or on behalf of public authorities to evaluate a natural person’s eligibility for public assistance benefits and services and to grant, reduce, revoke, or reclaim such benefits and services; AI systems intended to be used to assess a natural person’s creditworthiness or establish a credit score, with the exception of systems put into service by small-scale providers for their own use; and AI systems intended to be used to dispatch, or establish priority in the dispatching of, emergency first responders such as firefighters and medical aid.
- Law enforcement: AI systems intended to be used by law enforcement authorities for a variety of purposes, including individual risk assessments, detection of deepfakes, evaluation of the reliability of evidence, prediction of the occurrence or recurrence of actual or potential criminal offenses, profiling of natural persons, and crime analytics.
- Management of migration, asylum, and border control: AI systems intended to be used by competent public authorities for a variety of purposes, including detecting the emotional state of a natural person, assessing risks, verifying the authenticity of travel documents, and assisting in the examination of applications for asylum, visas, and residence permits.
- Management of judicial and democratic processes: AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts.
Prohibition of “Social Scoring”
In the context of the AI Act, “social scoring” refers to the practice of evaluating individuals based on their social behavior and personality traits, often drawing on a wide range of information sources. Such scores are used to assess, categorize, and rank individuals and can affect many aspects of a person’s life, such as access to loans, mortgages, and other services. The current draft bans social scoring by European public authorities. However, the European Economic and Social Committee (EESC) has expressed concern that the ban would not apply to private and semi-private organizations, which might then be free to use social scoring techniques. The EESC calls for a complete ban on social scoring in the EU and for the establishment of grievance and redress mechanisms for individuals harmed by AI systems.
Blurred Lines – Illegal Social Scoring and Proper Data Analysis
The EESC has also argued that the Act should distinguish between what constitutes social scoring and what is an acceptable form of assessment for a specific purpose. It suggests that a line can be drawn where the information used in an assessment is not reasonably relevant or proportionate to that purpose. The EESC further stresses that AI should augment, rather than replace, human decision-making and human intelligence, and criticizes the AI Act for failing to articulate this view.
Foundation models and large language models
A key aspect of the law concerns the regulation of “foundation models” such as OpenAI’s GPT and Google’s Bard. These models have attracted regulators’ attention because of their advanced capabilities and their potential to displace skilled workers. Providers of such foundation models must apply safety checks, data governance measures, and risk mitigations before releasing their models, and must ensure that the training data used to build their systems does not violate copyright law. Providers of these AI models will also be obligated to assess and mitigate risks to fundamental rights, health and safety, the environment, democracy, and the rule of law.
Impact on US business
As the United States seeks to bring order to the disruption wrought by AI, some of the Act’s principles can be expected to surface in both federal and state legislative proposals. Thanks to decades of transatlantic partnership built on established commerce and trade, many US companies are well acquainted with the EU’s higher standards in areas such as product safety regulation and data rights, and this trend should be expected to continue as trade between the two grows. The EU will likely continue to require compliance from US companies doing business across the Atlantic, and the scope of that compliance is expected to extend to AI. While these concepts could be realized in myriad ways, such as through the Biden administration’s Blueprint for an AI Bill of Rights, individual states may also be encouraged to develop their own regulatory schemes reflecting specific provisions of the Act. As the United States grapples with this change, businesses will need to stay alert to shifting regulatory structures and new enforcement mechanisms. The ultimate goal of the EU proposal is to provide a regulatory framework for AI companies and organizations that use AI, striking a balance between innovation and the protection of public rights.
