SAN FRANCISCO — Known as the “Godfather of Artificial Intelligence,” award-winning computer scientist Geoffrey Hinton is seriously rethinking his work.
Hinton helped pioneer AI technologies that are essential to a new generation of highly capable chatbots such as ChatGPT. But in a recent interview, he said he quit his high-profile job at Google so he could speak freely about his concern that unchecked AI development could pose a danger to humanity.
“I have suddenly changed my mind about whether these things are going to be more intelligent than us,” he said in an interview with MIT Technology Review. “I think they’re very close to it now, and they will be much more intelligent than us in the future. How are we going to survive that?”
Hinton isn’t the only one concerned. Shortly after the Microsoft-backed startup OpenAI released its latest AI model, GPT-4, in March, more than 1,000 researchers and technologists signed a letter calling for a six-month pause in AI development, warning that it poses “profound risks to society and humanity.”
Here’s a look at Hinton’s biggest concerns about AI and the future of humanity.
The human brain is able to solve calculus problems, drive cars, and keep track of the characters in “Succession” thanks to its native talent for organizing and storing information and reasoning out solutions to thorny problems. That is made possible by the roughly 86 billion neurons packed into our skulls and, more important, the 100 trillion connections those neurons forge among themselves.
By contrast, the technology underlying ChatGPT features between 500 billion and a trillion connections, Hinton said in the interview. While that would seem to put it at a major disadvantage relative to us, Hinton notes that GPT-4, OpenAI’s latest AI model, knows “hundreds of times more” than any single human. Perhaps, he suggests, it has a “much better learning algorithm” than ours, making it more efficient at cognitive tasks.
Researchers have long noted that artificial neural networks take much more time than humans to absorb and apply new knowledge, because training them requires vast amounts of both energy and data. That’s no longer the case, Hinton argues, noting that systems like GPT-4 can learn new things very quickly once properly trained by researchers. That’s not unlike the way a trained professional physicist can wrap her brain around new experimental findings much more rapidly than a typical high school science student could.
That leads Hinton to conclude that AI systems may already be outsmarting us. Not only can AI systems learn things faster, he notes, they can also share copies of their knowledge with one another almost instantly.
“It’s a completely different form of intelligence,” he told the publication. “A new and better form of intelligence.”
What would smarter-than-human AI systems do? One unnerving possibility is that malicious individuals, groups, or nation-states might simply co-opt them to further their own ends. Hinton is particularly concerned that these tools could be trained to sway elections and even to wage wars.
Election misinformation spread via AI chatbots, for example, could be the future version of election misinformation spread via Facebook and other social media platforms.
And that may just be the beginning. “Don’t think for a moment that Putin wouldn’t build a hyper-intelligent robot with the goal of killing Ukrainians,” Hinton said in the article. “He wouldn’t hesitate.”
It’s not clear how, exactly, anyone would stop a power like Russia from using AI technology to dominate its neighbors or its own citizens. Hinton suggests that a global agreement similar to the 1997 Chemical Weapons Convention might be a good first step toward establishing international rules against weaponized AI.
It’s worth noting, though, that the Chemical Weapons Convention did not stop what investigators found were likely Syrian attacks using chlorine gas and the nerve agent sarin against civilians in 2017 and 2018, during that country’s bloody civil war.
David Hamilton, AP Business Reporter
