A group of leading technology industry figures warned today that artificial intelligence (AI) could one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear war.
A one-sentence statement released by the Center for AI Safety (CAIS) on May 30 reads: “Reducing the risk of AI-induced extinction should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement was signed by more than 350 executives, researchers, and engineers working in AI, including Sam Altman, CEO of OpenAI, the company behind ChatGPT; Demis Hassabis, CEO of Google DeepMind; Dario Amodei, CEO of Anthropic; and Microsoft executives.
“AI experts, journalists, policy makers, and the public are increasingly discussing a wide range of important and pressing risks from AI,” the nonprofit said. “Still, it can be difficult to raise concerns about some of the most serious risks of advanced AI.”
With today's statement, hundreds of signatories aim to overcome this obstacle, open up debate, and create a common understanding that a growing number of experts and public figures take seriously “some of the most significant risks of advanced AI.”
The nonprofit’s mission is to mitigate the societal-scale risks posed by AI. It cites eight examples of “catastrophic or existential risk,” including weaponization, misinformation, and power-seeking behavior.
The statement comes amid growing concerns about potential harm from AI.
In March, tens of thousands of engineers and researchers signed another public letter calling for a six-month moratorium on the development of the largest AI models, citing concerns about an “uncontrolled race to develop and deploy ever more powerful digital minds.”
That letter was sponsored by the Future of Life Institute, another AI-focused nonprofit, and was signed by Elon Musk and other prominent tech leaders, but it drew few signatures from the major AI labs.
Altman, Hassabis, and Amodei met this month with President Joe Biden and Vice President Kamala Harris to discuss AI regulation, and pledged to continue engaging with the administration to ensure the country benefits from AI innovation.
“To realize the benefits of advances in AI, it is imperative to mitigate both the current and potential risks that AI poses to individuals, societies, and national security,” the White House said. “These include risks to safety, security, human and civil rights, privacy, jobs, and democratic values.”
The Biden administration also launched a new AI initiative this month aimed at fostering responsible innovation while protecting the rights and security of Americans, including first steps toward creating federal AI policy and a National AI Strategy.
The 22-word statement released today and signed by hundreds of people did not detail how AI could cause human extinction, but Cybersecurity and Infrastructure Security Agency Director Jen Easterly warned last month that the U.S. needs to quickly determine the regulatory status of AI technology development, which she said could become the most important and perhaps most dangerous technology of the 21st century.
“I think this is the biggest problem we have to deal with this century,” she said. “The most powerful weapons of the last century were nuclear weapons,” Easterly continued. “They were controlled by governments, and there was no incentive to use them; there was a disincentive to use them.”
“These are the most powerful technological capabilities, and possibly weapons, of this century,” she said. “And we don’t have the legal and regulatory systems in place to deploy them safely and effectively.”
The statement was released on the same day as the opening of the fourth EU-U.S. Trade and Technology Council, which convened U.S. Secretary of State Antony Blinken and other high-ranking officials in Luleå, Sweden, to discuss shared trade and technology issues.
