Hugh Roberts examines the different approaches the UK and EU are taking to regulate in response to AI’s rapidly evolving technological landscape.
Artificial intelligence (AI) is increasingly impacting every aspect of our lives. OpenAI’s ChatGPT has become the fastest-growing internet application of all time, with companies ranging from Microsoft to Duolingo integrating it into their products.
While AI technologies offer benefits, they also pose significant risks, and both the EU and the UK have shown strong interest in regulating them. Efforts to regulate AI are underway, but the approaches taken by the UK and the EU are very different.
European Union
In April 2021, the European Commission proposed the AI Act, draft legislation that would set out rules for governing AI within the EU. The draft is being revised by the Council of the EU and the European Parliament, with a final text expected to be agreed by the end of 2023 or early 2024.
The draft AI Act is a “horizontal” regulation, meaning it lays down rules for AI across all sectors and applications. It establishes four risk levels for AI: unacceptable risk, high risk, limited risk, and minimal risk. Different rules apply depending on the level of risk a system poses to fundamental rights.
AI deemed to pose unacceptable risks, such as real-time remote facial recognition systems used in public spaces, will be banned. High-risk systems, such as those used in critical infrastructure, are subject to several requirements, including conformity assessment. Limited and minimal risk systems are subject to transparency requirements and voluntary guidance, respectively.
The EU’s horizontal approach provides comprehensive rules for AI, but its rigidness is a drawback for a rapidly changing field like AI. In particular, the proposed risk framework may struggle to adapt to new developments.
This problem is already becoming a reality. Since the first draft of the AI Act was published, “foundation models”, such as the one underlying OpenAI’s ChatGPT, have advanced significantly. These models are trained on a wide range of data and can be easily adapted to multiple tasks. For example, ChatGPT can be used to generate benign text, such as football chants for new signings, or it can be put to malicious purposes, such as generating text for advanced phishing attacks.
Foundation models complicate the EU’s original risk-based framework, because that framework was designed to regulate AI trained to complete specific tasks, such as CV-screening systems and facial recognition cameras. As a result, the first draft of the AI Act placed relatively few restrictions on foundation models, despite the significant risks they pose.
The Council of the EU and the European Parliament have made efforts to update the task-specific risk framework in light of foundation models, but it is doubtful that these late-stage revisions will adequately address the full range of hazards.
Despite these challenges, the EU’s AI Act is likely to have international impact. The EU’s market size and regulatory capacity encourage companies to develop and offer EU-compliant products. For many kinds of AI systems, especially those that are hard to modify based on where they are deployed, companies are likely to simply adopt EU rules internationally.
United Kingdom
The UK has taken a different approach to AI regulation, favouring a “vertical” strategy that considers the impact of AI sector by sector and relies on existing regulators. This position was first set out in 2018 and subsequently confirmed in the recently published AI regulation white paper. However, after receiving industry feedback highlighting the risk of inconsistencies, duplications and gaps in this approach, the government has proposed a set of central functions to support regulatory coordination and monitor cross-sectoral risks.
The rationale behind the UK’s vertical approach is to limit new regulatory burdens that could stifle innovation, while providing sufficient flexibility to deal with new technological advances. Given the difficulties the EU is facing in updating its regulatory framework, this position has certain advantages.
The main drawback of the UK approach remains ambiguity as to how it will actually be enacted. Regulators have not been given new powers or funding to help address AI harms, and little detail is provided about what the central support functions will involve. As such, it is unclear whether regulators will have the resources to address emerging risks, especially those posed by foundation models with cross-cutting implications.
Despite this domestic ambiguity, the UK has been able to tout its credentials as an international leader in AI regulation. Over the past few months, Rishi Sunak has adopted AI as a key international policy priority. He announced that the UK would host the first global summit on AI safety, while promoting London as the home of the global AI regulator.
The type of harm Sunak focuses on is one that has received less attention in the EU’s AI Act. The global AI summit centres on long-term AI safety issues, explicitly mentioning threats that could “endanger humanity”. This is a major shift in UK policy, which just over a year ago called for regulators not to focus on this kind of “hypothetical risk”.
UK leadership prospects
In the future, the EU and the UK may be able to demonstrate complementary leadership. EU regulation provides a strong foundation for addressing harms that are already emerging, such as AI bias. The UK, on the other hand, appears better positioned to provide agile leadership, especially in responding to long-term risks from cutting-edge systems that are not adequately addressed in the AI Act.
This kind of complementary governance is desirable, but it will not be easy to achieve. Going forward, there are two key risks to UK AI leadership. First, an excessive focus on long-term risks, without giving regulators the support they need to address current harms, could undermine domestic governance efforts. This would threaten the UK’s international credibility as a leader in AI.
Second, the international community may not support the UK’s AI leadership. The EU and US are already coordinating many aspects of AI policy through the Trade and Technology Council, from which the UK is excluded. International organisations such as the OECD, UNESCO and the Global Partnership on AI are also already working towards international agreements on AI.
To overcome these risks, the UK needs to turn regulatory rhetoric into reality. Only then can it become a credible international leader in AI governance.
By Hugh Roberts, Researcher in AI and Sustainable Development, University of Oxford
