UK AI Bill Plans Explained

The AI bill was formally announced in the King's Speech on Wednesday, as Keir Starmer's Labour government moves away from the wait-and-see approach of the previous Conservative government.

During the general election campaign, Labour made several promises to bring forward legislation to ensure AI safety, and securing Royal Assent during this parliamentary session would mark the first concrete step towards the UK establishing a new AI regulatory regime.

What do we know so far?

Delivering the King's Speech, King Charles III said the government would “seek to bring in the right legislation to impose requirements on those working on developing the most powerful artificial intelligence models”.

This is not an explicit commitment to introduce an AI Bill, as previously expected, but it does signal Labour's desire to get started on a complex area that will affect UK AI startups.

Labour's manifesto included a brief mention of its AI plans: ahead of the general election, the party said it would introduce “binding regulation of the small number of companies developing the most powerful AI models”.

The manifesto also said Labour would ban “the creation of sexually explicit deepfakes”.

Technology Secretary Peter Kyle has set out Labour's position on AI in more detail. In an interview with BBC News given while he was Shadow Technology Secretary, Kyle said a Labour government would implement a “statutory code” requiring companies developing AI to share safety-testing data with the government and the AI Safety Institute.

This would be a tougher approach than that of the previous administration, which relied on voluntary, non-binding agreements on AI safety from tech companies.

Under the Conservative government, the AI Safety Institute received information from some AI developers, but there was no legal obligation for companies such as OpenAI or Microsoft to give the institute or other parts of government access to safety information.

Speaking at a policy event hosted by industry group techUK in February, Kyle said Labour would create a “regulatory innovation office” to make regulators faster and more adaptable to new technologies.

Darren Jones, Chief Secretary to the Treasury, has previously said that the UK's existing regulators “do not have the capacity” to oversee AI regulation and that “formal coordination” is lacking.

Many questions remain about the details, including whether the government will support an open-source requirement for AI models and the legislative timeline for the bill.

How does it compare with the EU's AI Act?

Details of the UK's proposed AI bill are still limited, but officials are likely to look closely at the EU's AI Act, which was approved in March and sets out binding rules for AI developers.

The EU AI Act divides risk into four levels: minimal, limited, high and unacceptable. AI uses in the unacceptable category, such as deliberate misinformation, social scoring and untargeted scraping of facial images, are banned outright.

The UK already has a legal framework covering some of the areas addressed by the EU law, notably the use of facial recognition technology outside law enforcement.

The UK AI bill is likely to borrow elements of the EU AI Act, such as requiring developers to keep detailed logs of safety testing to share with regulators.

What does the tech industry think?

Many in the tech industry, including the AI sector, are pleased that the legislation is moving forward, but few expect it to be implemented quickly.

“There are no answers up front and it's clear that there is some risk, but the fact that we're talking about it, that research on AI explainability is continuing, and the fact that legislation is being drafted, these are all encouraging,” said Jennifer Belissent, principal data strategist at Snowflake.

But as with any regulation of emerging technology, binding rules risk stifling innovation, and Luminance CEO Eleanor Lightbody said that, given the “multifaceted” nature of AI, blanket regulation would be ineffective.

“AI techniques are diverse, and the applications of large language models are varied. A one-size-fits-all approach to regulating AI would be rigid and risks quickly becoming outdated given the pace of AI development,” Lightbody said.

Ekaterina Almasque, general partner at technology venture capital firm OpenOcean, said the “previous administration's light-touch approach had merit” but that UK legislation was needed as other jurisdictions develop their own regimes.

Almasque said that if the UK were to harmonise its laws to some extent with those of the EU and the US, it could “facilitate an interoperable reporting system and provide a clear roadmap for AI companies operating in the UK”.




