Will artificial intelligence pose as great an existential threat to humanity as nuclear war or a pandemic? A new statement signed by AI scientists and other notable figures says AI should be treated as such.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement says. The 22-word statement, issued by the San Francisco-based nonprofit Center for AI Safety (CAIS), aims to spark a global discussion about the impending risks of AI.
Topping the list of signatories is AI research pioneer Geoffrey Hinton, who recently left Google so he could speak more freely about what he sees as the clear and present dangers posed by AI: “These things are getting smarter than us.”
Hinton, along with fellow signatory Yoshua Bengio, pioneered the deep learning methods employed in large language models such as GPT-4. Notably, Yann LeCun, chief AI scientist at Meta and the third member of the Turing Award-winning research trio, is absent from the list.
The statement was also signed by several CEOs of major AI players, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic.
A section of the CAIS website lists potential AI risks including weaponization, misinformation, deception, and power-seeking behavior. Regarding the potential for AI to seize power, the website says that AIs that have gained great power can be particularly dangerous if they are not aligned with human values or evade oversight, adding: “From this point of view, inventing a machine more powerful than us is playing with fire.”
The organization also argues that building such power-seeking AI could be encouraged by political leaders who see its strategic advantages, citing the claim that “whoever becomes the leader in [AI] will rule the world.”
The CAIS statement is the latest in a series of high-profile efforts focused on addressing AI safety. Earlier this year, a controversial open letter calling for a six-month pause in AI development, backed by some of the same people who signed the current warning, drew mixed reactions within the scientific community. Critics argued either that the letter exaggerated the risks posed by AI or, conversely, that they agreed about the potential risks but disagreed with the proposed solutions.
The Future of Life Institute (FLI), which authored that earlier open letter, has voiced support for the CAIS statement's goal of mitigating AI risk. The FLI recommends a course of action for addressing this risk, in particular the development and enactment of international agreements to limit the proliferation of high-risk AI and mitigate the dangers of advanced AI, as well as the establishment of intergovernmental bodies, akin to the International Atomic Energy Agency (IAEA), to promote the peaceful use of AI while reducing risk and ensuring guardrails are enforced.
Some experts argue that these open letters are misplaced and that AGI (autonomous systems with general intelligence) is not the most pressing concern. Emily Bender, a professor of computational linguistics at the University of Washington who co-authored a prominent critique of large language models with researchers from the AI ethics team Google fired in 2020, said in a tweet that the statement was “a wall of shame where people voluntarily add their names.”

“We should worry about the real harms that [corporations] and the people who make them up are doing in the name of ‘AI,’ not [about] Skynet,” she wrote.
One of these harms can be seen in the example of an eating disorder helpline that recently laid off its human team in favor of a chatbot called Tessa. The helpline, run by the National Eating Disorders Association (NEDA), had been active for 20 years. A report from Vice noted that after helpline staff moved to unionize early last month, the association announced it would replace them with Tessa as the group’s primary support system.
Tessa was taken offline by the organization just two days before the handover was to begin, after the chatbot encouraged harmful behaviors that can exacerbate eating disorders, such as strict calorie restriction and daily weigh-ins.
This article first appeared on our sister site Datanami.