In a “historic first”, 16 global AI companies have signed up to a new initiative on the safe development of AI models.
The announcement was made during the AI Seoul Summit, the second global event on AI safety, co-hosted virtually by the UK and South Korea on May 21 and 22.
Signatories of the Frontier AI Safety Commitments include some of the world's largest technology giants, such as Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI.
The signatories also include AI organizations from Europe (Cohere and Mistral AI), the Middle East (G42 and the Technology Innovation Institute), and Asia (Naver, Samsung, and Zhipu.ai).
AI risk threshold to be determined in France
These organizations have pledged to publish safety frameworks explaining how they measure the risks of their frontier AI models, including the risk of misuse of the technology by malicious actors.
The frameworks will also outline when severe risks would be deemed 'intolerable' if not adequately mitigated, and what steps the companies will take to ensure those thresholds are not exceeded.
In the most extreme circumstances, the companies have also committed to “not developing or deploying any models or systems at all” if mitigations fail to reduce risk below the agreed-upon thresholds.
The 16 organizations agreed to work with multiple stakeholders, including governments, to define these standards ahead of the AI Action Summit to be held in France in early 2025.
Professor Yoshua Bengio, a world-leading AI researcher, Turing Award winner, and lead author of the International Scientific Report on the Safety of Advanced AI, said he was pleased to see leading AI companies from around the world sign up to the Frontier AI Safety Commitments.
“In particular, we welcome companies' efforts to halt models that pose extreme risks until they are safe, and the steps they are taking to increase transparency in their risk management practices,” he said.
Emerging global AI safety governance system
These commitments build on existing agreements signed by major AI technology companies during the first AI Safety Summit, held at Bletchley Park in November 2023, as well as on other voluntary commitments, including those made in the United States and under the Hiroshima Code of Conduct.
The November 2023 summit also saw eight of the 16 current signatories agree to “deepen” access to their future AI models before they are released to the public.
Read more: 28 countries sign Bletchley Declaration on the responsible development of AI
While the initial list focused on Western companies, the expanded list of signatories now includes France's “OpenAI killer” Mistral AI, the UAE's Technology Innovation Institute, which is behind Falcon, one of the largest open-source large language models (LLMs), and China's Zhipu.ai.
Commenting on the news, British Prime Minister Rishi Sunak said: “This is the first time so many leading AI companies from so many different parts of the globe have all agreed to the same commitments on AI safety. These commitments will set a precedent for global standards on AI safety.”
Ya-Qin Zhang, professor and dean of the Institute for AI Industry Research at Tsinghua University in China, strongly welcomed the initiative.
“These efforts by a diverse group of Chinese, US, and international companies represent an important step forward in AI risk management and safety process transparency,” Zhang said.
“While this voluntary initiative will clearly need to be accompanied by other regulatory measures, it is nevertheless an important step forward in establishing an international governance regime to promote AI safety,” Bengio concluded.
Going beyond “empty” promises to ensure AI safety
However, the agreement has also been criticized.
Jamie Moles, Senior Technical Manager at ExtraHop, commented: “While safety frameworks sound great in theory, the vagueness of principles such as safety, reliability, and ethics is a far cry from addressing the harmful uses of AI we see every day.”
“Companies need to drop the grandiose claims and start a real dialogue with cybersecurity experts,” he said.
“AI can be used for many noble purposes, especially in the field of cybersecurity, but without clear limits and regulations that hold companies accountable, it will be used for malicious purposes, as we already see all too often.”
Ivana Bartoletti, Wipro's global chief privacy and AI governance officer and an expert on AI and human rights at the Council of Europe, also expressed mixed feelings about the AI Seoul Summit and the efforts of AI developers.
“The pre-summit report is a welcome break from the usual speculation and caution. But it is not enough, and we also need to make progress on governance. Having multiple AI safety agencies is commendable, but we need to clarify their functions,” she said.
Read more: UK and US forge common approach to AI safety