UK government publishes AI regulatory framework

AI For Business

On March 29th, the UK government published a white paper setting out a UK regulatory framework for artificial intelligence (AI) that "encourages innovation". The framework is built on five cross-cutting principles, and their implementation will be specific to the use of AI rather than to the technology itself. The government is not proposing to introduce new regulators or new legal requirements for businesses; instead, it leverages the existing powers and domain-specific expertise of UK regulators.

Purpose

This framework is designed to achieve three goals:

  1. Promote growth and prosperity: By reducing regulatory uncertainty and removing existing barriers to innovation, the UK government aims to enable AI businesses to capitalize on early development successes and achieve long-term market advantage. The proposal has a clear competitive urgency: to "strengthen our current position as a world leader in AI."
  2. Increase public trust in AI: By effectively addressing risks, the UK government aims to remove the barriers to AI products and innovation that are caused by a lack of trust in AI.
  3. Strengthen the UK's position as a global leader in AI: By working with its global partners, the UK government wants to play a key leadership role in shaping international governance and regulation, particularly in developing the global AI assurance industry.

The government explicitly excludes from the scope of the white paper issues related to access to data, computing power, sustainability, and the balance of rights of content producers and AI developers.

Key points

  • Definition of AI: There is no legal definition of AI. Instead, "AI" is defined by reference to the combination of two characteristics: (1) adaptivity, i.e., inferring patterns and making decisions in ways not explicitly envisaged by its human designers; and (2) autonomy, i.e., making decisions without explicit human intent or ongoing control. Defining AI in terms of functional capabilities aims to future-proof the framework against new technologies that are autonomous, adaptive, and unanticipated.
  • Context-specific, regulating use rather than technology: The framework regulates the outcomes that AI is likely to produce in particular contexts. This approach also extends to failures to use AI: the government highlighted feedback from regulators that not leveraging AI capabilities could itself cause harm, for example by not using AI in safety-critical processes.
  • Five cross-cutting principles: In implementing the context-specific approach, regulators should consider five cross-cutting principles, explained further below.
  • No new legal requirements: The government has said it will not introduce new legal requirements. However, after an unspecified implementation period, the government may introduce a statutory duty requiring regulators to have regard to the cross-cutting principles. That this is the only (potential) new statutory requirement, rather than an obligation aimed directly at businesses, is a clear sign of the government's growth-enhancing goals.
  • No interference with responsibility or accountability in the AI supply chain: The government concludes that it is premature to make cross-cutting decisions on the allocation of responsibility in the AI supply chain. For example, data controllers and data processors are assigned specific accountabilities under data protection law, and producers and distributors are likewise assigned accountability under product safety law. The government has left the issue to regulators, which it says are best placed to begin allocating responsibility in each sector, adopting a context-based approach built on best practice.
  • New central coordination functions: The government will establish cross-sector oversight, risk assessment, education, horizon scanning, and other central functions to support the implementation and consistency of the framework.
  • AI assurance techniques and technical standards: The government has indicated that these will play an important role in supporting the framework, and it will promote them by working with industry to publish a portfolio of AI assurance techniques.
  • Territorial application: The framework applies across the UK and does not change the territorial application of existing legislation. The UK government will work with international partners to promote interoperability and consistency between different approaches, mindful of the complex, cross-border nature of AI supply chains.

Cross-sectoral principles

The principles of the regulatory framework are set out below.

  1. Safety, security and robustness: AI systems must function as intended in a robust and secure manner throughout the AI lifecycle, and risks must be continuously identified, assessed, and managed. Safety-related risks are sector-specific, and regulators need to take a proportionate approach to managing them. Regulators may require relevant AI lifecycle actors to periodically test, or perform due diligence on, system functionality, resilience, and security.
  2. Appropriate transparency and explainability: Transparency refers to appropriately communicating information about an AI system, and explainability to the extent to which stakeholders can access, interpret, and understand its decision-making processes. Parties directly affected by the use of an AI system must also have access to sufficient information about the system to enable them to exercise their rights. Regulators may implement this principle through regulatory guidance.
  3. Fairness: AI systems must not undermine the legal rights of individuals or organizations, unfairly discriminate against individuals, or produce unfair market outcomes (under, for example, equality and human rights, data protection, consumer, and financial services law). Regulators can implement this principle through a combination of guidance (sector-specific and joint), technical standards, and assurance methodologies, as well as enforcement of existing statutory mandates.
  4. Accountability and governance: Companies must establish clear accountability across the AI lifecycle and put in place governance measures to effectively oversee the supply and use of AI systems. Regulators may implement this principle through regulatory guidance and assurance techniques.
  5. Contestability and redress: Users, affected third parties, and actors in the AI lifecycle must be able to challenge AI decisions or outcomes that are harmful or create a significant risk of harm. Regulators are expected to clarify existing routes to contestability and redress and, where necessary, take appropriate steps to ensure that the outcomes of AI use are contestable. The government's initial non-statutory approach does not create new rights or new routes of redress at this stage.

Next steps

The government is soliciting views on its specific proposals, including the cross-cutting principles, until June 21, 2023. The white paper also includes a long list of actions the UK government will take over the next year. These include:

  • Publishing a portfolio of AI assurance techniques
  • Publishing an AI regulatory roadmap for the central risk and oversight functions
  • Encouraging regulators to issue guidance on how the cross-sectoral principles apply within their remits
  • Publishing a draft central, cross-economy AI risk register for consultation