Government officials and AI industry executives agreed Tuesday to apply basic safety measures in the fast-changing field and establish an international safety research network.
The UK and South Korea are co-hosting this week's AI summit in Seoul, nearly six months after the first global AI Safety Summit was held at Bletchley Park in the UK. The gathering highlights the new challenges and opportunities facing the world with the emergence of AI technologies.
The UK government on Tuesday announced an agreement between 10 countries and the European Union to establish an international network of institutes modelled on the UK's AI Safety Institute, the world's first publicly funded body of its kind, to accelerate progress in the science of AI safety. The network will foster a common understanding of AI safety and align work on research, standards, and testing. Australia, Canada, the European Union, France, Germany, Italy, Japan, Singapore, South Korea, the United Kingdom and the United States signed the agreement.
On the first day of the summit in Seoul, world leaders and leading AI companies gathered for a virtual session co-chaired by UK Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol to discuss AI safety, innovation and inclusion.
During the discussions, the leaders adopted the broader Seoul Declaration, which addresses major global issues, upholds human rights, and emphasizes strengthening international cooperation so that AI helps bridge the digital divide.
"AI is a hugely exciting technology, and the UK is leading the global effort to address its potential, having hosted the world's first AI Safety Summit last year," Sunak said in a UK government statement. "But to reap the benefits, we need to ensure safety. That's why we are pleased to have reached agreement today on a network of AI Safety Institutes."
Just last month, the UK and US signed a memorandum of understanding to collaborate on research, safety evaluation and guidance on AI safety.
Also announced at the summit was a world-first set of AI safety commitments from 16 AI companies, including Amazon, Anthropic, Cohere, Google, IBM, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, Samsung Electronics, the Technology Innovation Institute, xAI, and Zhipu.ai, building on the companies' existing safety initiatives. (Zhipu.ai is a Chinese company backed by Alibaba, Ant and Tencent.)
AI companies from countries including the US, China and the United Arab Emirates (UAE) agreed to safety commitments to "not develop or deploy any models or systems where mitigation measures cannot reduce the risks below a threshold," according to the UK government's statement.
"This is the first time in the world that so many leading AI companies from so many regions of the globe have agreed to the same commitments on AI safety," Sunak said. "These commitments will ensure the world's leading AI companies provide transparency and accountability on their plans to develop safe AI."