Experts call for new approach to AI, ‘civilization-ending’ technology










Stuart Russell, professor of computer science at the University of California, Berkeley, and a leading AI researcher, says AI technology has the power to change the world: it could improve the quality of life for people across the planet, or it could destroy civilization. He urged countries to regulate AI to ensure that it advances human interests.

“Intelligence means the power to shape the world for our own benefit. Creating systems that are more intelligent than humans, individually or collectively, creates entities that are more powerful than we are,” Russell said at a talk hosted by the CITRIS Research Exchange and the Berkeley AI Research Lab. “How can we forever retain power over entities more powerful than us?”

“If you pursue [our current approach], then we will eventually lose control of the machines. But we can take another route that actually leads to AI systems that are beneficial to humans. In fact, we could have a better civilization.”

The release of chatbots like ChatGPT has given the public a lens into what AI can do and into its future opportunities and dangers. But given the technology’s world-changing potential, and the $13.5 trillion that Russell describes as a “conservative” estimate of the value creation expected from artificial general intelligence (AGI), asking developers to slow down may be a difficult request.

Existing AI systems like ChatGPT operate as black boxes, Russell said. It is unclear whether these tools have goals of their own, whether those goals match ours, or whether they can pursue goals at all. Examples such as a New York Times reporter rejecting a chatbot’s advances suggest that they might be able to, he said.

Instead, AI should be designed to promote human interests, to recognize that it does not know what those interests are, and to look for evidence to identify and act on them. This requires rethinking AI concepts such as planning, reinforcement learning, and supervised learning that rely on prior knowledge of objectives. AI also needs to be developed in a “grounded” way, with a rigorous understanding of all of its components and how they work together. This would allow us to predict how these systems will behave, he said.


Professor Stuart Russell gave a talk titled “How Not to Destroy the World With AI” at a lecture hosted by the CITRIS Research Exchange and the BAIR Lab at the University of California, Berkeley. Credit: Video by CITRIS and the Banatao Institute

“I don’t know of any other way to get enough confidence in the operation of these systems,” said Russell.

He said that even with technology built on this foundation, there should be rules prohibiting the release of unsafe AI.

The last “civilization-ending technology,” according to Russell, was nuclear power, which was subject to rigorous engineering controls and cautious regulation. AI should be regulated the same way, he said.

According to Russell, an international legal framework already exists that describes what responsible AI is and sets out related recommendations. Before deployment, developers must be able to demonstrate that an AI system is robust and predictable and does not pose undue risk to society. Businesses should adhere to these principles, and governments should turn them into rules, he said.

“We shouldn’t deploy systems whose internal principles of operation we don’t understand, that may or may not have their own internal goals that they are pursuing, and that we claim exhibit ‘sparks of AGI,’” he said, referring to a paper by Microsoft researchers claiming that OpenAI’s GPT-4 shows “sparks of artificial general intelligence.”

“If you believe there are sparks of AGI, then this is a technology that has the potential to completely change the face of our planet and civilization,” said Russell. “Why wouldn’t you take it seriously?”


