Will AI really destroy humanity?



Warnings come from all quarters: artificial intelligence poses an existential threat to humanity, and it must be reined in before it is too late.

But what are these disaster scenarios, and how will machines end up destroying humanity?

– Destiny Paperclip –


Most disaster scenarios start at the same place. Machines surpass human capabilities, escape human control, and refuse to turn off.

“Once we have machines with the purpose of self-preservation, we’re in trouble,” AI scholar Yoshua Bengio said at an event this month.

But these machines don’t exist yet, so imagining how they could destroy humanity is often left to philosophy and science fiction.

Philosopher Nick Bostrom has written about the “intelligence explosion” that occurs when superintelligent machines begin to design their own machines.

He illustrated the idea with a story about a super-intelligent AI in a paperclip factory.

The AI was given the ultimate goal of maximizing paperclip output, and so "proceeds by converting first the Earth and then ever-larger portions of the observable universe into paperclips."

Bostrom's ideas have been dismissed by many as science fiction, not least because he has separately argued that humanity may be living in a computer simulation and has voiced support for theories close to eugenics.

He also recently apologized after a racist message he sent in the 1990s came to light.

However, his thinking on AI has been highly influential, inspiring both Elon Musk and the late Stephen Hawking.

– Terminator –

If superintelligent machines are to destroy humanity, they would presumably need a physical form.

Arnold Schwarzenegger's red-eyed cyborg in the movie The Terminator, sent back from the future by an AI to crush human resistance, has proved a particularly captivating image for the media.

However, experts have poured cold water on this idea.

"This sci-fi concept is unlikely to become a reality in the coming decades, if at all," the Stop Killer Robots campaign wrote in its 2021 report.

But the group warns that giving machines the power to make life-or-death decisions is an existential risk.

Kerstin Dautenhahn, a robotics expert at the University of Waterloo in Canada, downplayed those concerns.

She told AFP that advances in AI are unlikely to give machines higher reasoning abilities or imbue them with a desire to exterminate humans.

“Robots aren’t evil,” she said, but admitted that programmers could make them do evil things.

– More Dangerous Chemicals –

A scenario less rooted in science fiction imagines "villains" using AI to create toxins or new viruses and unleash them on the world.

Large language models like GPT-3, the technology behind ChatGPT, have turned out to be very good at inventing horrific new chemicals.

A group of scientists who had been using AI to discover new drugs ran an experiment in which they fine-tuned their AI to search for harmful molecules instead.

They managed to generate 40,000 potentially toxic agents in under six hours, as they reported in the journal Nature Machine Intelligence.

Joanna Bryson, an AI expert at the Hertie School in Berlin, said she could imagine someone devising a way to spread a poison like anthrax more quickly.

“But it’s not a survival threat,” she told AFP. “It’s just a terrible, terrible weapon.”

– Overtaken Species –

Hollywood convention dictates that epochal disasters be sudden, immense, and dramatic. But what if humanity's end were instead slow, quiet, and not definitive?

"In the bleakest version, our species may come to an end with no successor," says philosopher Huw Price in a promotional video for Cambridge University's Centre for the Study of Existential Risk.

But, he said, there are "less bleak possibilities" in which humans augmented by advanced technology could survive.

"The purely biological species would eventually come to an end, in that there would be no humans around who do not have access to this enabling technology," he said.

The imagined apocalypse is often framed in evolutionary terms.

Stephen Hawking argued in 2014 that humans would eventually be unable to compete with AI machines, telling the BBC it could "spell the end of the human race."

Geoffrey Hinton, who spent his career building machines that resemble the human brain, most recently at Google, speaks in similar terms of a "superintelligence" that simply surpasses humans.

He recently told US broadcaster PBS that “humans may be just a transient stage in the evolution of intelligence.”



