AI Regulation Tracker: UK and EU Take Different Approaches to AI Regulation | Bryan Cave Leighton Paisner



Artificial intelligence (“AI”), once confined to the pages of science fiction, is now seen as a key strategic priority for both the UK and the EU.

The UK, in particular, plays an important role on the technological frontier, ranking third globally for private investment in AI companies in 2020,1 behind only the United States and China. Proposals for AI regulation in the UK and EU are at different stages and take different forms: while the UK proposes a light-touch regulatory approach to encourage innovation, the EU is focused on establishing clear rules, hoping to attract investment through regulatory certainty.

As companies increasingly integrate AI into their products, services, processes and decisions, they must do so in a manner that complies with the differing UK and EU regulatory approaches.

As with most new technologies, the establishment of regulatory and compliance frameworks has lagged behind the rise of AI. That is likely to change as the UK government further clarifies its approach to sector-based regulation of AI and as the EU’s ambitious AI Act enters the final stages of the legislative process. Additional EU legislation on AI liability is also in preparation.

Our AI Regulation Tracker provides the latest information on legislation that, when passed, will directly affect the development or deployment of AI solutions for businesses in the UK and EU.2

BCLP actively tracks proposed and enacted AI legislation to help clients stay up to date in this rapidly changing regulatory landscape, covering the US as well as the UK and EU. Explore an interactive map of AI law across the United States.

On 21 April 2021, the European Commission published a draft regulation that would set out harmonized rules for AI across the European Union (the “AI Act”). Because it is a regulation, once the AI Act is finalized and comes into force, it will apply directly in each of the 27 EU Member States.

The AI Act applies to:

  • Providers that place on the market or put into service AI systems within the EU, regardless of where the provider is established;
  • Users of AI systems located within the EU; and
  • Providers and users of AI systems located in third countries, where the output of the system is used within the EU.

The AI Act will therefore have broad extraterritorial application and will need to be considered by providers and users of AI systems around the world.

A “risk-based” approach to regulating AI

The AI Act takes a “risk-based” approach, classifying AI systems into four tiers:

  • Prohibited;
  • High risk;
  • Limited risk; and
  • All other systems (effectively minimal risk).

The rules that apply to an AI system depend on which tier it falls into.

1. Prohibited (including AI systems that use technology to manipulate and harm individuals):

The European Commission says AI systems of this kind pose a “clear threat to people’s safety, livelihoods and rights”. As originally drafted, Article 5 banned systems that use subliminal techniques to materially distort a person’s behavior in a harmful way; systems that exploit the vulnerabilities of specific groups; “social scoring” by public authorities; and the use of biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).

A later draft of the AI Act, adopted by the joint committees of the European Parliament (“EP”) on 9 May 2023, proposed several significant changes, indicating that it is still too early to reliably assess the impact of the final AI Act. For example, the EP proposes that, to fall within the prohibited tier, an AI system must have the objective or effect of materially distorting the behavior of a person or group of persons by appreciably impairing the person’s ability to make an informed decision, “in a manner that causes or is likely to cause that person, another person or group of persons significant harm”.

The EP proposal also bans the use of “real-time” remote biometric identification systems in publicly accessible spaces altogether; such systems could then only be used by law enforcement after the event (i.e., not in real time), subject to judicial authorization and only in connection with serious crimes.

The EP’s proposal further expanded the scope of “prohibited” systems to include systems used for predictive policing; the scraping of facial images to build or expand facial recognition databases; and the inference of emotions in the areas of law enforcement, border management, the workplace and educational institutions.

2. High risk (i.e. the system is subject to additional safeguards including human oversight):

These include systems used for:

  • Biometric identification and categorization of natural persons;
  • Management or operation of critical infrastructure;
  • Education and vocational training;
  • Employment and worker management; and
  • Access to and enjoyment of essential public and private services, including creditworthiness assessment and credit scoring.

These systems are subject to stringent additional obligations, such as the need to undergo a “conformity assessment”, pre-registration regimes, appropriate risk management and mitigation systems, and adequate human oversight. Requirements vary depending on whether the high-risk AI is embedded as part of a broader system (such as a medical device) or supplied on a standalone basis.

In its draft of 9 May, the European Parliament proposed limiting the classification of “high risk” systems to those that pose a “significant risk” to human health, safety or fundamental rights (different rules apply to safety components).

3. Limited risk:

Although not “high risk”, certain AI systems that interact with or could manipulate human behavior, such as chatbots and emotion recognition systems, are subject to specific transparency obligations. For example, users must be made aware that they are interacting with a machine rather than a human.

4. All other AI systems (minimal risk):

These are systems that pose minimal risk to users’ rights and safety, such as AI-enabled spam filters. The original text proposes no specific requirements for this tier.

5. Foundation models:

The EP’s 9 May draft introduces additional obligations on “foundation models”, defined as AI models that are trained on broad data at scale, designed for generality of output, and adaptable to a wide range of distinctive tasks. These proposals are a response to the popularity of foundation models such as GPT, on which ChatGPT is based.

What are the possible fines?

Fines are proposed on a sliding scale, with the most serious violations (including breaches of the prohibitions and of certain obligations applicable to high-risk systems) subject to fines of up to:

  • €30 million; or
  • 6% of total worldwide annual turnover for the preceding financial year (whichever is higher).
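The “whichever is higher” mechanic above can be illustrated with a minimal arithmetic sketch (purely illustrative; the function name and figures are ours, and the actual fine would be set by the enforcing authority within these ceilings, not computed mechanically):

```python
def max_fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative only: the proposed ceiling for the most serious
    violations is the HIGHER of EUR 30 million or 6% of total worldwide
    annual turnover for the preceding financial year."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# For a company with EUR 1bn turnover, 6% (EUR 60m) exceeds EUR 30m,
# so the turnover-based figure sets the ceiling; for a company with
# EUR 100m turnover, the EUR 30m floor applies instead.
print(max_fine_ceiling_eur(1_000_000_000))  # 60000000.0
print(max_fine_ceiling_eur(100_000_000))    # 30000000.0
```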

When will the AI Act come into force?

The Council of the EU adopted a “general approach” in December 2022, which included a number of changes to the text. The EP plans to formally adopt its own amendments at the June plenary session, after which the “trilogue” phase will begin: negotiations between the EP, the Council and the European Commission to reach agreement on the final text.

Once the normal EU legislative process is completed (which is expected to take several more months), the AI Act will come into effect after a 24-month transition period. As a result, the AI Act is unlikely to apply until 2025 at the earliest.

First proposed by the European Commission on 28 September 2022, the draft AI Liability Directive (the “AI Liability Directive”) aims to modernize the EU liability framework by introducing rules specifically for harm caused by AI systems. The EU product liability regime is being updated in parallel.

Presumption of causation

The AI Liability Directive establishes a rebuttable presumption of a causal link between a failure to comply with a duty of care (i.e., negligence) under EU or national law and (a) the output produced by an AI system, or (b) the failure of an AI system to produce an output, where that output or failure caused the relevant damage. The presumption applies only if the following conditions are met:

  • The claimant proves that the defendant failed to comply with a specific EU or national duty of care relevant to the damage caused by the AI system (this may include non-compliance with provisions of the AI Act);
  • It can be considered reasonably likely, based on the circumstances of the case, that the defendant’s negligence influenced the output produced by the AI system, or the AI system’s failure to produce an output; and
  • The claimant demonstrates that the output produced by the AI system, or the AI system’s failure to produce an output, gave rise to the damage.

The operation of the presumption also depends on whether the AI system is “high risk”, with different rules applying in each case. A defendant may rebut the presumption of causation by, for example, demonstrating that its negligence could not have caused the relevant damage.

Disclosure of evidence

The AI Liability Directive will also empower national courts in EU Member States to order the disclosure of evidence relating to high-risk AI systems in certain circumstances.

When will the AI Liability Directive come into effect?

Timing is difficult to predict at this stage. Once the directive comes into force, the draft gives EU Member States a further two years to transpose its requirements into national law.

The UK Government’s Department for Science, Innovation and Technology (“DSIT”) published a white paper on 29 March 2023 (the “AI White Paper”) proposing a lighter-touch approach, consistent with the UK’s National AI Strategy announced in September 2021. No bill is currently proposed, in stark contrast to the approach taken in the EU.

The AI White Paper proposes a flexible definition of AI systems based on their characteristic adaptability and autonomy. It also proposes a principles-based framework for existing regulators to interpret and apply within their remits. Regulators are expected to issue guidance on how the principles interact with existing legislation to assist compliance in each sector. This reflects the UK government’s view that AI is a general-purpose technology that will cut across the remits of a number of regulators (suggesting that cooperation between regulators will be essential).

The five cross-cutting principles that regulators are expected to apply are:

  1. Safety, security and robustness;
  2. Appropriate transparency and explainability;
  3. Fairness;
  4. Accountability and governance; and
  5. Contestability and redress.

Initially, the principles will not be placed on a statutory footing. Legislation may follow requiring regulators to have due regard to the principles if they are found not to be applying them properly. AI assurance techniques and technical standards are also expected to play a major role.

The AI White Paper does not seek to assign liability for damage caused by AI; that issue is left to the existing legal framework, subject to further review. However, the possibility of future legislative intervention is not ruled out.

(1) https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version

(2) We have focused on AI-specific regulations. In the UK and EU, the framework established by the General Data Protection Regulation (GDPR) in 2018 regulates the use of personal data (including biometric data) for profiling and automated decision-making. Although AI and automation systems are becoming more integrated, it is important to note that not all automated decision-making systems contain AI (or personal data). The UK data protection framework is currently undergoing reform. Some of the proposals are also aimed at facilitating the use of personal data in connection with AI systems.


