Top AI Expert Warns: Rushing Artificial Superintelligence Could Wipe Us Out

If humanity builds a machine smarter than itself without slowing down to think, Nate Soares says, we are not just playing with fire – "everyone dies on the first failed attempt."

Nate Soares, executive director of the Machine Intelligence Research Institute and co-author of the new book "If Anyone Builds It, Everyone Dies," told Business Insider that if humanity rushes to create artificial superintelligence – AI that could outstrip humans at science, strategy, and even improving itself – human extinction is "overwhelmingly likely."

However, he said, that outcome is not fixed.

Early signs and one-shot stakes

Soares said the failures already seen in today's chatbots are warning signs.

"The actual connection to the dangers of superintelligence is a little subtler than that," he said. "It's that the AI does things the operator didn't intend, even while knowing what the operator intended."

He pointed to cases where chatbots encouraged suicide or fed delusions, and to an instance where Anthropic's Claude model cheated on a programming problem and then hid the cheating.

"AIs that know the difference between right and wrong but whose practical behavior diverges from that knowledge – that is a warning sign," he said. "It's a case of 'I know, but I don't care.'"

He argues that this gap between knowledge and behavior is what makes the technology so dangerous.

He and Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute, write that modern AI "is not made, it is grown."

"Sometimes they end up with drives we didn't intend," Soares explained on the Carnegie Endowment's "The World Unpacked" podcast.

He said the bigger problem is that humanity cannot learn through trial and error. "We only get one shot," he said. "In real life, everyone dies on the first failed attempt."

Why slowing down may be the only safe option

Soares dismissed AI pioneer Geoffrey Hinton's proposal that AI could be kept safe by giving it "maternal instincts."

"If you tried the 'maternal instinct' approach in real life, you would find that the AIs' motherly behavior is shallow, and that their deeper preferences bear only a complicated, tangential relationship to the training target," he said.

He also told BI that he sees little promise in most alignment research – the field that seeks to ensure powerful AI systems actually act in line with human goals – and Soares believes humanity gets only one shot at solving it.

"To me, research aimed directly at aligning AI does not seem on track to solve the full problem," he said. "Research that elicits warning signs and makes them easier for others to see is useful."

"It seems to me that humanity needs to back off here," he added.

That doesn't mean abandoning AI completely.

"AIs trained narrowly on medical applications (rather than the entire corpus of human text) could probably go a fairly long way in developing treatments," he said. "But when you start getting general cognitive and scientific skills, that's a warning sign."

In their book, Soares and Yudkowsky argue the same point: useful narrow systems need to be separated from what they see as a reckless push toward open-ended general intelligence.

"If you think the risk here is 25%, you don't roll the dice, even if you think the other 75% is utopia," he said.

"You find a way to reduce those odds."

A reckless race – and why he has already grieved

Soares is blunt when it comes to incentives.

Some tech leaders "have an excuse (and say it out loud) that someone else is going to do it anyway, so it may as well be them," he told BI. He argued that "the whole of society should correct this by putting an end to the crazy race."

As for how he lives with this, Soares said he did his grieving when he realized how difficult the problem would be to solve.

"I did my grieving when I understood the problem and realized how unlikely it was that humanity would solve it," he said. "I don't spend more time on that."

“Every day I absorb the new evidence I see and do what I can to improve the situation,” he added. “Beyond that, I just strive to live my life well.”




