Stuart Russell calls for a new approach to AI, a ‘civilization-ending’ technology


This technology has the power to change the world, says Stuart Russell, a computer science professor and AI expert at the University of California, Berkeley. It could improve the quality of life for people across the planet or destroy civilization, he said, and he urged countries to regulate AI to ensure it advances human interests.

“Intelligence means the power to shape the world for our own benefit. Creating systems that are more intelligent than humans, individually or collectively, means creating entities that are more powerful than we are,” Russell said during a lecture hosted by the CITRIS Research Exchange and the Berkeley AI Research Lab. “How do we retain power, forever, over entities more powerful than us?”

“If we pursue [our current approach], then we will eventually lose control of the machines. But we can take a different route that actually leads to AI systems that are beneficial to humans,” Russell said. “In fact, we could have a better civilization.”

The release of chatbots like ChatGPT has given the public a lens into what AI can do and the opportunities and dangers it may bring. But the opportunity to change the world comes with enormous stakes: Russell described $13.5 trillion as a “lowball” estimate of the value creation expected from AGI.

Stuart Russell, Professor of Computer Science at the University of California, Berkeley.

Existing AI systems like ChatGPT operate as black boxes, Russell said. It is unclear whether these tools have goals of their own, whether those goals match ours, or whether they can pursue goals at all. But a chatbot that repeatedly confessed its love to a New York Times reporter who rejected its advances suggests they might be able to, he said.

Instead, AI systems should be designed to promote human interests, to recognize that they do not know what those interests are, and to seek evidence in order to identify and act on them. This requires rethinking AI concepts, such as planning, reinforcement learning, and supervised learning, that rely on prior knowledge of objectives. AI also needs to be developed in a “well-founded” manner, with a rigorous understanding of all of its components and how they work together. This would allow us to predict how these systems will behave, he said.

“I don’t know of any other way to get enough confidence in the operation of these systems,” said Russell.

He said that even if the technology is built on this foundation, there should be rules prohibiting the release of unsafe AI.

The last “civilization-ending technology,” according to Russell, is subject to strict controls and close attention on the part of its engineers, and it is meticulously regulated. So should AI be, he said.

An international legal framework that explains what responsible AI is and lays out related recommendations already exists, Russell said. Developers must be able to demonstrate that an AI system is robust, predictable, and does not pose undue risk to society before deploying it. Businesses should adhere to these principles, and governments should turn them into rules, he said.

“You shouldn’t deploy a system whose internal principles of operation you don’t understand, that may or may not have its own internal goals that it pursues, and that you claim shows ‘a spark of AGI,’” he said, referring to a recent paper by Microsoft researchers claiming that OpenAI’s GPT-4 shows a “spark of artificial general intelligence.”

“If we believe we have a spark of AGI, a technology that has the potential to completely change the face of the planet and of civilization,” Russell said, “why wouldn’t we take it seriously?”
