The UK and South Korean governments announced that major artificial intelligence companies have signed up to a new voluntary initiative on AI safety.
Tech giants Amazon, Google, Meta and Microsoft, as well as companies such as Sam Altman's OpenAI, Elon Musk's xAI and Chinese developer Zhipu AI, will publish frameworks outlining how they will measure the risks of their “frontier” AI models.
The companies also pledged not to develop or deploy a model at all if severe risks could not be mitigated, the two governments announced ahead of the AI summit in Seoul on Tuesday.
The announcement builds on the so-called Bletchley Declaration, made at the first AI Safety Summit hosted by British Prime Minister Rishi Sunak in November.
“These commitments will ensure the world's leading AI companies are transparent and accountable about their plans to develop safe AI,” Sunak said in a statement. “This will set a precedent for global standards on AI safety, unlocking the benefits of this transformative technology.”
A communiqué outlining the agreement said the AI companies would “assess the risks posed by their frontier models or systems . . . including before deploying that model or system, and, as appropriate, before and during training”.
The companies will also define “thresholds at which significant risks posed by a model or system are deemed intolerable unless appropriately mitigated”, and set out how such mitigations will be implemented.
“The field of AI safety is rapidly evolving, and we are particularly pleased to support efforts focused on refining approaches in parallel with the science,” said Anna Makanju, vice-president of global affairs at OpenAI.
“We remain committed to working with other research institutions, companies, and governments to ensure that AI is safe and benefits all humanity.”
Tuesday's announcement mirrors a “voluntary commitment” made at the White House last July by Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft and OpenAI to “support the transition to safe, secure, and transparent AI technology development.”
However, it remains unclear how companies will be held accountable if they fail to fulfill their promises.
In Tuesday's communiqué, the 16 companies agreed to “provide public transparency” on the implementation of their commitments, “unless doing so would increase risk or divulge sensitive commercial information disproportionate to the public interest”.
Speaking in Seoul on Tuesday night ahead of the virtual summit, UK science secretary Michelle Donelan told the Financial Times that the voluntary agreement struck at Bletchley was working.
“We therefore believe these agreements will deliver benefits once again,” Donelan said.
“But this is not just about what more companies can do, it is also about what more countries can do,” she added. Donelan confirmed that a representative of the Chinese government would attend a meeting on the second day of the summit on Wednesday.
Dan Hendrycks, safety adviser at xAI, said the voluntary initiative would help “lay the foundation for concrete national regulations”.
But Donelan reiterated the UK's position that it was too early to consider legislation to enforce AI safety.
“We need to understand the risks better,” she said, noting that the UK government would provide up to £8.5mn in grants for research into AI-related risks such as deepfakes and cyber attacks.
She added that if the UK government had introduced legislation on the issue last year, “it would probably have been outdated by the time it was published”.