Photo: Mohammed Nohasi/Unsplash
The warning is stark: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The pithy statement, released in May by the nonprofit Center for AI Safety, was signed by a long list of influential people, including sitting lawmakers, former state Supreme Court justices, and tech industry executives. Among the signatories were many of the very people who are building and deploying artificial intelligence today; hundreds of them, drawn from academia, industry, and civil society, identify themselves as “AI scientists.”
Should we be concerned that those designing and deploying AI are, like many modern-day Oppenheimers, warning of existential risks? Yes, but not for the reasons the signatories imagine.
As a law professor specializing in AI, I know and respect many of the people who signed this statement. I consider some of them mentors and friends. I think most of them are genuinely concerned that AI poses an extinction-level risk on par with a pandemic or nuclear war. But this cautionary statement is almost certainly motivated by more than technical concern; deeper social and (yes) market forces are at work. And it is not hard to see how public fixation on the risk of AI-driven extinction could benefit industry players while harming society.
How do the signatories imagine this extinction will occur? Public statements to date suggest that some envision a scenario in which an AI gains consciousness and deliberately exterminates humanity. Others picture a slightly more plausible path to catastrophe, in which AI is handed sweeping control over infrastructure, defense systems, and markets before a series of black swan events brings down civilization.
Whether the scenario is Skynet or Lemony Snicket, these developments are vanishingly unlikely. There is no clear path from today’s machine learning models, which mimic human creativity by predicting the next word, sound, or pixel, to an AI that can form hostile intentions or evade our every effort to contain it.
Either way, it is only natural to ask why Dr. Frankenstein is the one carrying the pitchfork. Why are the very people who build, deploy, and profit from AI leading the call to focus public attention on its existential risk? There are at least two possible reasons.
First, it costs far less to call attention to a hypothetical threat than to address the immediate harms and costs AI already imposes on society. Today’s AI is riddled with errors and shot through with bias. It fabricates facts and reproduces discriminatory heuristics, which is drawing scrutiny from governments and consumers alike. AI is displacing workers and exacerbating income and wealth inequality. And it poses a large and growing threat to the environment, consuming an enormous and increasing amount of energy and fueling a race to extract raw materials from an already beleaguered planet.
These social costs are not easily absorbed. Mitigating them would require a significant commitment of personnel and other resources, which does not please shareholders; indeed, the market has lately rewarded tech companies for laying off members of their privacy, security, and ethics teams.
How much easier would life be for AI companies if the public instead fixated on speculative theories about distant threats that may or may not come to pass? And what, exactly, would “addressing” those threats involve? I suspect little more than a vague white paper, a series of workshops led by speculative philosophers, and donations to computer science labs willing to speak the language of longtermism. That is paltry compared with the effort required to reverse the ways AI is devaluing labor, exacerbating inequality, and accelerating environmental destruction.
A second reason the AI community is willing to cast its own technology as an existential threat may, ironically, be to reinforce the idea of AI’s vast potential. Claiming that AI is powerful enough to end human existence is a remarkably effective way for AI scientists to make the case that what they are working on matters. Doomsaying is great marketing. The long-term fear may be that AI threatens humanity, but the short-term fear, for anyone who has not yet incorporated AI into their business, agency, or classroom, is being left behind. The same goes for national policy: if AI poses an existential risk, U.S. policymakers might reason, then better not to fall behind China through underinvestment or overregulation. (This even as Sam Altman, CEO of OpenAI and a signatory to the Center for AI Safety statement, has warned the European Union that his company will pull out of Europe if regulation becomes too burdensome.)
One might ask: Does it have to be one or the other? Why can’t we address both the immediate and the long-term concerns about AI? In theory, we can. In practice, money and attention are finite, and there is an enormous opportunity cost in elevating speculative future risks above tangible, immediate harms. The Center for AI Safety statement itself seems to acknowledge this reality in its use of the word “priority.”
To be sure, the generative AI behind today’s chatbots and image and speech generators can do amazing things, leveraging its talents for classification and prediction to create novel content across a range of domains. But there is no need to imagine fanciful scenarios to understand the threat it poses in the near term.
Harnessing the power of AI while addressing its harms will take an all-out effort. We must work at every level to build meaningful guardrails, protect the vulnerable, and ensure that the costs and benefits of the technology are distributed proportionately across society. Prioritizing a speculative and remote risk of extinction distracts from these goals. Treating AI as an existential threat only elevates the status of those closest to the technology and invites the lighter-touch rules they prefer. That is a mistake society cannot afford to make.
Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law, and a professor at the UW School of Information and (by courtesy) the Paul G. Allen School of Computer Science & Engineering.
This article was originally published on Undark. Read the original article.