Scientists warn of AI dangers, but disagree on solutions



CAMBRIDGE, Massachusetts (AP) — The computer scientists who helped build the foundations of today’s artificial intelligence technology are warning of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.

Geoffrey Hinton, known as the “Godfather of AI,” said at a conference at the Massachusetts Institute of Technology on Wednesday that humanity’s survival is threatened when “smart things can outsmart us.”

Hinton, 75, who recently left Google so he could speak more freely, said he has changed his views on the reasoning capabilities of the computer systems he spent a lifetime researching.

Addressing an audience at the MIT Technology Review’s EmTech Digital conference by video from his home, Hinton said these systems “will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people.” He added: “Even if they can’t directly pull levers, they can certainly get us to pull levers.”

“I wish I had a nice, simple solution to this, but I don’t,” he added. “I’m not sure there is a solution.”


Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the top computer science prize, told The Associated Press on Wednesday that he is largely aligned with Hinton’s concerns about chatbots such as ChatGPT and related technology, but worries that simply saying “we are doomed” is not helpful.

“The main difference, I would say, is that he’s kind of a pessimistic person, and I’m more on the optimistic side,” said Bengio, a professor at the University of Montreal. “I do think the dangers need to be taken seriously, not just by a few researchers but by governments and the public.”

There are plenty of signs that governments are listening. The White House invited the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet Thursday with Vice President Kamala Harris for what officials describe as a frank discussion on how to mitigate both the near-term and long-term risks of their technology. European lawmakers are also accelerating negotiations to pass sweeping new AI rules.

But all the talk of the most dire future perils has some worried that hype around superhuman machines, which don’t yet exist, is distracting from attempts to set practical safeguards on current AI products that remain largely unregulated.

Margaret Mitchell, a former leader of Google’s AI ethics team, said she is upset that Hinton didn’t speak out during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google’s Bard.

“It’s a privilege that he gets to skip over the realities of these issues happening now — the propagation of discrimination, the propagation of hate speech, the toxicity and nonconsensual pornography of women,” said Mitchell, who was also forced out of Google in the aftermath of Gebru’s departure. “He’s skipping over all of those things to worry about something farther off.”

Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent company Meta, were awarded the Turing Award in 2019 for their breakthroughs in the field of artificial neural networks, which have been instrumental to the development of today’s AI applications such as ChatGPT.

Bengio, the only one of the three who didn’t take a job with a tech giant, has voiced concerns for years about AI’s near-term risks, including job market destabilization, automated weaponry and the dangers of biased data sets.

But those concerns have grown recently, leading Bengio to join other computer scientists and tech business leaders such as Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause on developing AI systems more powerful than OpenAI’s latest model, GPT-4.

“This is a milestone that can have dramatic consequences if we’re not careful,” Bengio said. “My main concern is how they can be exploited for nefarious purposes, such as cyberattacks and disinformation, to destabilize democracies. You can have a conversation with these systems and think you’re interacting with a human. They’re difficult to spot.”

Where researchers are less likely to agree is on how current AI language systems — which have many limitations, including a tendency to fabricate information — could actually become smarter than humans.

Aidan Gomez was one of the co-authors of a pioneering 2017 paper that introduced the so-called transformer technique — the “T” at the end of ChatGPT — which significantly improved the performance of machine-learning systems, especially in how they learn from passages of text. Then just a 20-year-old intern at Google, Gomez remembers lying on a couch at the company’s California headquarters when his team sent out the paper around 3 a.m., just as it was due.

“Aidan, this is going to be huge,” he remembers a colleague telling him of the work, which has since helped lead to new systems that can generate humanlike prose and imagery.

Six years later, and now CEO of his own AI company, Cohere, which Hinton has invested in, Gomez is enthusiastic about the potential applications of these systems but bothered by fearmongering that he says is “detached from the reality” of their true capabilities and “relies on extraordinary leaps of imagination and reasoning.”

“The notion that these models are somehow going to get access to our nuclear weapons and launch some kind of extinction-level event is not a productive discourse to have,” Gomez said. “It’s harmful to the real, pragmatic policy efforts that are trying to do something good.”

Copyright 2023 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

