When noted computer scientist and Turing Award winner Geoffrey Hinton left Google over his concerns that AI technology would spiral out of control and become dangerous to humans, it caused a frenzy in the tech world.
Having worked part-time at Google for more than a decade, Hinton is known as the "Godfather of AI." A pioneer in the field, he made significant contributions to the development of machine learning, deep learning, and backpropagation, the process by which artificial neural networks are trained.
In his own words
While Hinton attributed part of his decision to retire on May 1 to his age, the 75-year-old also said he regrets some of his contributions to artificial intelligence.
During a Q&A session at MIT Technology Review's EmTech Digital 2023 conference on May 3, Hinton said he has changed his mind about how AI technology works. He said he now believes AI systems can be far more intelligent than humans and may be better learners.
"Something like GPT-4 knows a lot more than we do," Hinton said, referring to the latest version of OpenAI's large language model. "They have some sort of common-sense knowledge about everything."
The more the technology learns about humans, he said, the better it will become at manipulating them.
Hinton’s concerns about the risks of AI technology echo those of other AI leaders who recently called for a pause in AI development.
The computer scientist doesn't believe a pause is possible. But he said the risks of AI technology, and its misuse by criminals and other bad actors, particularly those who would use it for harmful political purposes, could be dangerous to society.
"What we want is some way to get them to do something useful for us, even if they're smarter than us," Hinton said. "In a world with bad actors, we have to try."
AI competition and the need for regulation
Hinton clarified that his decision to leave Google was not due to any particular irresponsibility on the tech giant's part regarding AI technology. Still, his departure adds the computer scientist to a group of prominent former Google employees who have sounded the alarm about AI technology.
Last year, former Google engineer Blake Lemoine claimed that the vendor's AI chatbot LaMDA was sentient, carried on spontaneous conversations, and had human emotions. Lemoine also said that after he presented his data, Google acted cautiously and slowed development.
While Google may have shown a fair degree of responsibility in its AI efforts, the pace at which major technology vendors, particularly its biggest rival Microsoft, have introduced new AI systems (such as generative AI in Azure and integrations into office applications) has forced Google to scramble to keep up, turning the situation into a frantic AI race.
But the frenetic pace at which both Google and Microsoft are moving may be too fast to assure enterprise and consumer users that AI innovations are safe and ready to be used effectively.
“They’re releasing things at a fast pace without doing enough testing,” said Chirag Shah, a professor of information science at the University of Washington. “We have no regulations. We have no checkpoints. There is nothing that can stop them from doing this.”
But the federal government is eyeing the issue of AI and generative AI technology.
On May 4, the Biden administration invited the CEOs of AI vendors Microsoft, Alphabet, OpenAI and Anthropic to discuss the importance of responsible and trusted innovation.
The administration also said developers from leading AI companies such as Nvidia, Stability AI and Hugging Face will participate in the public evaluation of AI systems.
But Shah said the technology is risky, especially because generative AI systems learn on their own and there is an almost complete lack of checkpoints and regulations.
If generative AI systems are unregulated and unchecked, they can lead to disaster, primarily when people with malicious political intent or criminal hackers exploit the technology.
"These things are rapidly getting out of our hands. It's only a matter of time before bad actors do things with it, or the technology acts on its own, and we can't stop it," Shah said. For example, malicious actors could use generative AI to commit fraud, provoke terrorist attacks, or perpetuate and instill bias.
But like many technologies, mass adoption will lead to regulation, said Usama Fayyad, professor and executive director of Northeastern University’s Institute for Experiential AI.
And since OpenAI launched ChatGPT last November, the chatbot has amassed more than 100 million users, most of whom don't rely on it routinely the way they do other popular AI tools such as Google Maps and Google Translate, but use it only occasionally, Fayyad said.
"You cannot regulate before you understand the technology," he continued, adding that regulators do not yet fully understand the technology and are therefore unable to regulate it.
Like cars, guns and many other things, AI is likely to be regulated as the technology becomes more important, Fayyad said.
Regulation is therefore likely to arrive once AI technology is embedded in most applications and enables most knowledge workers to do their jobs faster, Fayyad said.
Intelligence of AI technology
Just because AI technology "thinks" faster doesn't mean it is more intelligent than humans, Fayyad added.
"We believe that only intelligent people can speak eloquently and fluently," he said. "We mistake fluency and eloquence for intelligence."
Because large language models follow probabilistic patterns (that is, they follow common conventions, but with some randomization), they are in effect programmed to tell stories, and they may end up telling the wrong one. Moreover, they are built to come across as smart, which can make them appear more intelligent than they actually are, Fayyad said.
Moreover, the fact that machines are better at certain tasks doesn't mean they're smarter than humans, said Sarah Kreps, a professor in the government department and an adjunct professor of law at Cornell University.
"What humans excel at is more complex tasks that combine multiple cognitive processes with empathy, adaptation, and intuition," Kreps said. "It's hard to program machines to do these things, and that's what's behind the elusive artificial general intelligence (AGI)."
AGI, which does not yet formally exist, is software with general human cognitive abilities that could, in theory, perform any task a human can perform.
Next steps
Hinton said he is trying to bring the issue to the forefront to encourage people to find effective ways to confront the risks of AI.
Kreps, meanwhile, said Hinton's decision to speak up now, decades after he first worked on the technology, may seem hypocritical.
“He should have seen where the technology was going and how fast it was going,” she said.
On the other hand, Hinton’s position could make people more cautious about AI technology, she added.
For AI technology to be used effectively, Shah said, there needs to be transparency and accountability. "There will also need to be consequences for those who abuse it," he said.
“We have to come up with an accountability framework,” he said. “There’s still some harm, but if we can control a lot of it, we can mitigate some of the issues much better than we do now.”
For Hinton, the best thing is to enable the next generation to use AI technology responsibly.
"What people like Hinton can do is help create a set of norms for the proper use of these technologies," Kreps said. "They can discourage the misuse of AI and contribute to guardrails that can mitigate the risks of AI."
Esther Ajao is a news writer covering artificial intelligence software and systems.