50% chance AI will end in “catastrophe”



A former lead researcher at OpenAI believes there is a real chance that artificial intelligence will take control of humanity and destroy it.

“I think there is a 10% to 20% chance of an AI takeover, [with] many [or] most humans dead,” Paul Christiano, who led the language model alignment team at OpenAI, said on the Bankless podcast.

Christiano, who now heads the Alignment Research Center, a non-profit that aims to align AI and machine learning systems with “human interests,” said he is particularly worried about what will happen once AI reaches human-level logical and creative capacity. “Overall, we might be talking about a 50/50 chance of catastrophe happening right after we have a human-level system,” he said.

Christiano is in good company. Recently, scientists around the world signed an open letter urging OpenAI and the other companies racing to build faster, smarter AI to hit the pause button on development. Prominent figures from Bill Gates to Elon Musk have voiced concern that, left unchecked, AI poses an obvious existential danger to humanity.

Don’t be evil

Why would AI turn evil? Basically, for the same reason humans do: training and life experience.

Like a baby, an AI is trained on huge amounts of data without really knowing what to do with it at first. It learns by trying to reach specific goals through initially random actions, homing in on the “correct” results defined by its training.
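To make that trial-and-error idea concrete, here is a minimal, hypothetical sketch (not taken from the article or from any OpenAI system) of a reward-driven learner: an epsilon-greedy agent that starts out choosing actions at random and gradually prefers whichever action its training signal marks as “correct.” The action names and reward probabilities are invented purely for illustration.

import random

# Hypothetical toy example: an agent that learns by trial and error.
# Three possible actions; the "environment" rewards each with a fixed
# (made-up) probability, standing in for the "correct" results defined
# in training.
ACTIONS = ["A", "B", "C"]
REWARD_PROB = {"A": 0.1, "B": 0.8, "C": 0.3}

value = {a: 0.0 for a in ACTIONS}   # agent's running estimate of each action
count = {a: 0 for a in ACTIONS}

def choose(epsilon=0.1):
    # Mostly exploit the best-looking action, occasionally explore at random.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value[a])

for _ in range(10000):
    action = choose()
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    count[action] += 1
    # Nudge the estimate toward the observed reward (incremental average).
    value[action] += (reward - value[action]) / count[action]

print(value)  # estimates converge toward REWARD_PROB, so "B" wins out

After enough trials the agent all but abandons the low-reward actions; the concern Christiano and others raise is about what happens when systems vastly more capable than this toy pursue goals that were not specified carefully.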

So far, machine learning has let AI make giant leaps in piecing together well-structured, coherent responses to human queries by immersing itself in the data accumulated on the internet. At the same time, the underlying computing that powers machine learning is getting faster, better, and more specialized. Some scientists believe that within a decade, that processing power, combined with advances in artificial intelligence, could allow these machines to become sentient and self-aware, like humans.

That’s when things get hairy, which is why many researchers argue we need to figure out how to impose guardrails now rather than later. As long as an AI’s behavior is monitored, it can be controlled.

But if the coin lands on the other side, even an OpenAI co-founder says things could get much worse.

Whomsday?

This topic has been debated for years. One of the most famous debates on the subject took place 11 years ago between AI researcher Eliezer Yudkowsky and economist Robin Hanson. The two discussed the possibility of reaching “foom” (apparently short for “Fast Onset of Overwhelming Mastery”), the point at which AI becomes exponentially smarter than humans and capable of self-improvement. (The origin of the term “foom” is debatable.)

“Eliezer and his followers believe it is inevitable that AI will go ‘foom’ without warning, meaning that one day someone builds an AGI [artificial general intelligence] and hours or days later the thing has recursively self-improved into a god-like intelligence and consumes the world. Is this realistic?” Perry Metzger, a computer scientist active in the AI community, recently tweeted.

Metzger argued that even if computer systems reach the level of human intelligence, there will still be plenty of time to head off any bad outcomes. “Is ‘foom’ logically possible? Maybe. I’m not sure. Is it going to happen? No, I’m pretty sure not. Will there be superhuman AI in the long term? Yes, but not a ‘foom,’” he said.

Another public figure, Yann LeCun, also weighed in, calling it “utterly impossible” for humanity to experience an AI takeover. Let’s hope so.




