Seoul (AFP) – More than a dozen of the world's leading artificial intelligence companies made new safety pledges at a global summit in Seoul on Tuesday, the British government said in a statement.
The agreement with 16 tech companies, including ChatGPT maker OpenAI, Google DeepMind and Anthropic, builds on agreements reached at the first Global AI Safety Summit held at Bletchley Park in the UK last year.
“These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI,” British Prime Minister Rishi Sunak said in a statement released by the Department for Science, Innovation and Technology.
Under the agreement, AI companies that have not yet shared how they assess risk for their technology will make their framework public, according to a statement.
These frameworks will set out which risks are deemed “intolerable” and what the companies will do to ensure those thresholds are not crossed.
“In the most extreme circumstances, the companies also commit to 'not developing or deploying any models or systems at all' if mitigation cannot reduce the risk below a threshold,” the statement added.
The definition of these thresholds will be decided ahead of the next AI Summit to be hosted by France in 2025.
Companies that have agreed to the safety rules include US tech giants Microsoft, Amazon, IBM and Instagram parent Meta, as well as France's Mistral AI and China's Zhipu.ai.
The dangers of “deepfakes”
ChatGPT's runaway success soon after its 2022 release sparked a gold rush in generative AI, with tech companies around the world pouring billions of dollars into developing their own models.
Generative AI models can produce text, photos, audio and even video from simple prompts, and their backers have hailed them as a breakthrough that could improve lives and businesses around the world.
But critics, rights activists and governments have warned the technology can be misused in a wide variety of situations, including to manipulate voters with fake news articles and so-called “deepfake” pictures and videos of politicians.
Many have called for international standards to govern AI's development and use, and for action at summits such as this week's two-day meeting in Seoul.
In addition to safety, the Seoul summit will discuss how governments can help foster innovation, including AI research in universities.
Participants will also consider how to ensure technology is accessible to all and helps tackle issues such as climate change and poverty.
The Seoul summit comes just days after OpenAI confirmed it had disbanded a team dedicated to mitigating the long-term risks of advanced AI.
“The field of AI safety is rapidly evolving, and we are particularly pleased to support efforts focused on refining approaches in parallel with the science,” Anna Makanju, OpenAI's vice president of global affairs, said in a statement announcing the new commitments on Tuesday.
The two-day summit will be held partially virtually, with a mix of closed sessions and public sessions in Seoul.
South Korean President Yoon Suk Yeol and Britain's Sunak are scheduled to co-chair a virtual session of world leaders later on Tuesday.
© 2024 AFP