- AI expert Stuart Russell has repeatedly said that the expansion of AI needs to be paused before humanity loses control.
- In an interview with Business Today, Russell likens the threat of unregulated AI to a potential Chernobyl-like disaster.
- Experts are calling on AI developers to prove their systems are safe before releasing them to the public.
Stuart Russell knows AI. And he is concerned about its uncontrolled growth. In fact, he is so worried that, in an interview with Business Today, he says uncontrolled artificial intelligence could create a “Chernobyl for AI.”
It has the potential to change lives beyond our current understanding.
A professor of computer science at the University of California, Berkeley, Russell has spent decades as a leader in the AI field. He also signed an open letter, along with prominent figures such as Elon Musk and Steve Wozniak, calling for a moratorium on developing powerful AI systems, defined as those more powerful than OpenAI’s GPT-4.
“Powerful AI systems should be developed only when we are confident that their effects will be positive and their risks manageable,” the letter said. “Society has hit pause on other technologies with potentially devastating effects. We can do that here.”
Proponents of the letter say that no comparable level of planning and control is happening in the AI field, even though the technology could represent a “major shift in the history of life on Earth.” The signatories describe AI labs as locked in an “uncontrollable race to develop and deploy ever-more-powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.”
Left unchecked, that kind of development could lead to a “Chernobyl for AI,” Russell tells Business Today, referring to the 1986 nuclear disaster in Ukraine that has harmed people’s lives for over 35 years.
“What we are asking for is to create reasonable guidelines,” he says. “You have to be able to convincingly demonstrate that the system can be safely released, and you have to show that the system meets those guidelines. Then we have to prove that it is safe, that it can withstand earthquakes, that it doesn’t explode like Chernobyl.”
According to Russell, creating new AI systems is not that different from building a nuclear power plant or a plane expected to fly hundreds of passengers safely: if even the slightest thing goes wrong, the effects on people and the world around them could be devastating.
AI is just as powerful, and even the experts don’t know what an AI catastrophe would look like. But they want to make sure we never find out.
Tim Newcomb is a journalist based in the Pacific Northwest. He has covered stadiums, sneakers, gear, infrastructure and more for a variety of publications, including Popular Mechanics. Some of his favorite interviews include sitting down with Roger Federer in Switzerland, Kobe Bryant in Los Angeles, and Tinker Hatfield in Portland.