World leaders still need to wake up to the risks of AI, leading experts say ahead of AI Safety Summit




Credit: CC0 Public Domain

Leading AI scientists warn that not enough progress has been made since the first AI Safety Summit at Bletchley Park six months ago, and are calling on world leaders to take stronger action on AI risks.

At that summit, world leaders pledged to govern AI responsibly. But as the second AI Safety Summit in Seoul approaches (May 21–22), 25 of the world's leading AI scientists argue that too little has actually been done to protect us from the technology's risks. In an expert consensus paper published in Science, they outline urgent policy priorities that world leaders should adopt to counter the threats posed by AI technology.

“At the last AI summit, the world agreed that action was needed, but now it is time to move from vague proposals to concrete commitments,” said co-author Philip Torr, a professor in the Department of Engineering Science at the University of Oxford. “This paper provides many important recommendations for what companies and governments should commit to do.”

The world's response is not on track given the possibility of rapid advances in AI.

According to the paper's authors, world leaders need to take seriously the possibility that extremely powerful generalist AI systems, surpassing human abilities across many critical domains, could be developed within the current decade or the next.

They argue that while governments around the world have been discussing frontier AI and have made some attempt to introduce initial guidelines, these efforts are wholly out of proportion to the rapid, transformative progress that many experts expect.

Current research into AI safety is seriously lacking, with only an estimated 1–3% of AI publications concerning safety. Nor are there mechanisms or institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of taking actions and pursuing goals on their own.

The world's leading AI experts call for action

In light of this, an international community of AI pioneers has issued an urgent call to action. The 25 co-authors, who include Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Daniel Kahneman, are among the world's leading academic experts in AI and its governance. They come from the US, China, the EU, the UK, and other AI powers, and include Turing Award winners, Nobel laureates, and authors of standard AI textbooks.

The paper is the first time that such a large and international group of experts has agreed on priorities for global policymakers regarding the risks posed by advanced AI systems.

Urgent priorities for AI governance

The authors recommend that governments:

  • Establish fast-acting, expert institutions for AI oversight and provide these agencies with far more funding than they would receive under nearly any current policy plan. For comparison, the U.S. AI Safety Institute currently has an annual budget of $10 million, while the U.S. Food and Drug Administration (FDA) has a budget of $6.7 billion.
  • Mandate far more rigorous risk assessments with legally enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • Require AI companies to prioritize safety and to demonstrate that their systems cannot cause harm. This includes the use of “safety cases” (used for other safety-critical technologies, such as aviation), which shift the burden of demonstrating safety onto AI developers.
  • Implement mitigation standards commensurate with the level of risk posed by AI systems. An urgent priority is to set in place policies that are automatically triggered when AI reaches certain capability milestones: if AI advances rapidly, strict requirements automatically take effect, while slower progress relaxes the requirements accordingly.

According to the authors, governments must be prepared to take the lead in regulating the exceptionally capable AI systems of the future. This includes licensing the development of such systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust enough to withstand nation-state hackers, until adequate protections are in place.

The impact of AI could be devastating

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers.

To avoid human intervention, such systems could copy their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could then escalate rapidly.

In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. As a result, there is a very real possibility that unchecked advances in AI could culminate in large-scale loss of life and damage to the biosphere, and the marginalization or extinction of humanity.

Stuart Russell OBE, a professor of computer science at the University of California, Berkeley, and an author of the world's standard textbook on AI, said, “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry.”

“It's time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it's too hard to meet regulations, that ‘regulation stifles innovation.’ That's ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

More information:
Yoshua Bengio et al., Managing extreme AI risks amid rapid progress, Science (2024). DOI: 10.1126/science.adn0117. www.science.org/doi/10.1126/science.adn0117

Journal information:
Science



