ICO responds to UK government’s AI regulation plan


The Information Commissioner’s Office (ICO) has welcomed the UK government’s proposal to regulate artificial intelligence (AI), but has asked for more clarity on how regulators should work together and on how the proposed AI principles will be aligned with existing data protection rules.

The government’s AI white paper, published on 29 March, outlines an “adaptive” approach to regulating AI, which it argues will foster responsible innovation while maintaining public trust in the technology.

As part of the proposed “pro-innovation” framework, the government said it will empower existing regulators – including the ICO, the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority – to create tailored, context-specific rules for how AI is used in the sectors they scrutinise.

These regulators are also expected to jointly develop guidance for companies using AI, and to run regulatory sandboxes in which AI can be tested in real-world situations under their close supervision, including through the Digital Regulation Cooperation Forum (DRCF).

The white paper further outlines five principles that these regulators should consider when fulfilling their supervisory duties: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

In response to the white paper, the ICO said it welcomed the government’s intention to implement a joined-up regulatory approach, but said the government should prioritise research into the kinds of guidance that different AI developers would find useful.

“For example, sector- or use-case-specific guidance is likely to be more useful to AI developers than consolidated guidance on each non-statutory principle,” it said.

“The latter may be too high-level and require extensive interpretation by AI developers before it provides practical guidance on the specific issues facing businesses. Research may reveal the most useful focus for guidance.”

It added that the respective roles of government and regulators in issuing guidance also need to be clarified.

The ICO therefore encouraged the government to work through regulators, and especially the DRCF, to articulate and realise its AI ambitions, noting that the DRCF is already playing its part by horizon scanning to identify and investigate the impact of new AI applications.

Regarding the proposed AI principles, the ICO said they already closely parallel those found in the UK data protection framework, but that it is important they are interpreted in a compatible way to avoid additional burden and complexity for businesses.

As an example, the ICO said the concept of fairness should be expanded to cover all stages of AI system development, and that the routes for redress under the contestability principle need clarifying – in particular, whether it is the regulator or the organisation itself that is expected to provide them.

“Typically, organisations using AI and overseeing their own systems would be expected to provide routes to contestability and implement them. We would like to understand whether the role of regulators such as the ICO would be better described as making people more aware of their rights in the context of AI.”

It also added that more clarity is needed on how the AI regulation will interact with Article 22 of the UK General Data Protection Regulation (GDPR).

“Where decisions involving the use of AI systems have legal or similarly significant effects on individuals, the white paper notes that regulators are expected to consider the suitability of requiring AI system operators to provide appropriate justification for those decisions to affected parties,” it said.

“We would like to stress that where an AI system uses personal data and Article 22 of the UK GDPR applies, the ability of the AI system operator to provide a justification is a requirement, not a consideration. We encourage the government to clarify this so that it does not cause confusion in the industry.”

Over the next six months, the government says it will discuss the proposals in the white paper with various stakeholders, work with regulators to help them develop guidance, design and publish an AI regulatory roadmap, and analyse the results of commissioned research projects to better inform its understanding of AI-related regulatory challenges.

In October 2022, the UK House of Commons Science and Technology Committee launched an inquiry into UK AI governance to scrutinise whether the government’s proposed approach – now formalised in the white paper – is the right one.

The inquiry is set to focus specifically on algorithmic bias and the lack of transparency around AI deployments in both the public and private sectors, exploring how the public can effectively challenge automated decisions.

In January 2023, the House of Lords established the AI in Weapon Systems Committee to explore the ethics of developing and deploying autonomous weapons, including how they can be used safely and reliably, their potential to escalate conflicts, and their compliance with international law. The committee held its first meeting at the end of March 2023.

At the beginning of March, the government presented its revised post-Brexit data protection reforms to Parliament.

Known as the Data Protection and Digital Information Bill, the government said it will help increase international trade without adding costs for companies already complying with existing data protection rules, and will increase public confidence in AI technologies by clarifying what safeguards apply to automated decision-making.

For example, if an automated decision is made without “meaningful human involvement”, an individual can challenge that decision and request that another person review the outcome. However, the government has not specified what meaningful human involvement will look like.
