UK expands AI safety lab to San Francisco, home of OpenAI

The U.S. version of the AI Safety Institute aims to hire a team of technical staff led by a principal investigator. The London-based institute currently has a team of 30 people and is headed by Ian Hogarth, a prominent British technology entrepreneur who founded the concert discovery site Songkick.

British technology minister Michelle Donelan said in a statement that the expansion of the AI Safety Institute to the US "represents the UK's leadership in AI in action."

"It is a pivotal moment in the UK's ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety."

The government said the expansion will allow the UK to "leverage the wealth of technology talent available in the Bay Area, engage with the world's largest AI labs headquartered in both London and San Francisco, and strengthen its relationship with the United States to advance AI safety for the public."

San Francisco is home to OpenAI, the Microsoft-backed company behind the viral AI chatbot ChatGPT.

The AI Safety Institute was founded in November 2023 during the AI Safety Summit, a global event held at Britain's Bletchley Park, home of World War II codebreakers, that aimed to foster cross-border collaboration on AI safety.

The AI Safety Institute's expansion to the US comes on the eve of the AI Seoul Summit in South Korea, an event first proposed at the UK's Bletchley Park summit last year. The Seoul summit will be held on Tuesday and Wednesday.

The government said the AI Safety Institute has made progress in evaluating cutting-edge AI models from several leading companies in the industry since it was established in November.

On Monday, the institute said that some AI models completed cybersecurity challenges but struggled with more advanced ones, while several models demonstrated PhD-level knowledge of chemistry and biology.

All models tested by the institute remain highly vulnerable to "jailbreaks," in which users trick them into producing responses not permitted under their content guidelines, and some models generate harmful output even without any attempt to circumvent their safeguards.

According to the government, the models tested were also unable to complete more complex, time-consuming tasks without human oversight.

The names of the AI models tested were not disclosed. The government previously secured commitments from OpenAI, DeepMind and Anthropic to open up their AI models to the government to help inform research into the risks associated with their systems.

The development comes as the UK faces criticism for not introducing formal AI regulations, even as other jurisdictions, such as the European Union, race ahead with laws tailored to AI.

The EU's landmark AI Act, the first major law of its kind, is expected to become a blueprint for global AI regulation once it is approved by all EU member states and comes into force.


