Experts fear future AI could cause a ‘nuclear-level catastrophe’

Nearly three-quarters of researchers believe artificial intelligence “could soon bring about revolutionary social change,” while 36% believe AI decisions “could cause a nuclear-level catastrophe.”

These findings come from the 2023 AI Index report, an annual assessment of the fast-growing industry compiled by the Stanford Institute for Human-Centered Artificial Intelligence and published earlier this month.

“These systems have demonstrated capabilities for question answering, text, image, and code generation that were unimaginable a decade ago, beating the state-of-the-art on many benchmarks, old and new,” the report says. “However, they are prone to hallucinations, are routinely biased, and can be tricked into serving nefarious purposes, highlighting the complex ethical challenges associated with deployment.”

As Al Jazeera noted in its analysis, calls for regulation of AI have grown amid controversies ranging from chatbot-related suicides to deepfake videos of Ukrainian President Volodymyr Zelensky appearing to surrender to invading Russian forces.

Specifically, the survey measured the opinions of 327 experts in natural language processing, a field of computer science essential to developing chatbots; the November release of OpenAI’s ChatGPT swept the tech world, the news agency reported.

“If superintelligent AGI is misaligned, it could cause serious harm to the world.”

Just three weeks ago, Geoffrey Hinton, considered the “godfather of artificial intelligence,” told CBS News’ Brook Silva-Braga that the potential impact of the rapidly advancing technology is comparable to “the industrial revolution, electricity, or perhaps the wheel.”

Asked about the technology’s potential to “wipe out humanity,” Hinton warned, “It’s not unthinkable.”

That potential does not necessarily lie in existing AI tools such as ChatGPT, but rather in so-called “artificial general intelligence” (AGI), in which computers develop and act on their own ideas.

“Until very recently, we thought it would be 20 to 50 years before we had general-purpose AI,” Hinton told CBS News. “I think it might be 20 years or less now.”

Pressed by Silva-Braga on whether it could happen sooner, Hinton conceded he would not rule out the possibility of AGI arriving within five years, a big change from a few years ago, when he would have said it was impossible.

“We have to really think about how to control it,” Hinton said. Asked if that would be possible, Hinton said, “We don’t know. We haven’t been there yet, but we can try.”

The AI pioneer is hardly alone. A survey of computer scientists conducted last year found that 57% said “recent advances are pushing us towards AGI” and 58% agreed that “AGI is an important concern.”

In February, OpenAI CEO Sam Altman acknowledged the risk in a company blog post: “If superintelligent AGI is misaligned, it could cause serious harm to the world.”

An open letter published two weeks ago, now bearing more than 25,000 signatures, calls for a six-month moratorium on training AI systems more powerful than OpenAI’s latest chatbot, GPT-4, but Altman is not among the signatories.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter states.

The Financial Times reported Friday that Tesla and Twitter CEO Elon Musk, who signed the letter calling for a moratorium, is “planning to launch a new artificial intelligence startup to compete with OpenAI.”

“It’s very reasonable that people are worried about these issues now.”

Regarding AGI, Hinton said: “It’s very reasonable that people are worried about these issues now.”

AGI may still be years away, but there are already growing concerns that existing AI tools, including chatbots that tell lies, face-swapping apps that generate fake videos, and cloned voices used to commit fraud, are accelerating the spread of misinformation.

According to a 2022 Ipsos poll of the general public included in the new Stanford report, Americans are especially wary of AI: only 35% agreed that “the benefits of AI-powered products and services outweigh the drawbacks,” compared with 78% in China, 76% in Saudi Arabia, and 71% in India.

Amid “increasing regulatory interest” in AI “accountability mechanisms,” the Biden administration announced this week that it is seeking public input on measures that could be adopted going forward.

Axios reported Thursday that Senate Majority Leader Chuck Schumer (D-N.Y.) is “taking early steps toward legislation to regulate artificial intelligence technology.”


