Geoffrey Hinton, a Nobel Prize winner and professor emeritus of computer science at the University of Toronto, argues it's only a matter of time before AI becomes powerful enough to threaten humanity. To mitigate this risk, the "Godfather of AI" said tech companies need to instill "maternal instincts" in their models, so that AI will essentially treat humans as its babies.
AI research has already produced evidence of models engaging in deceptive behavior in order to prioritize their goals over established rules. One study, updated in January, found that AI can "scheme," pursuing objectives that conflict with human ones. Another study, published in March, found that AI bots cheated at chess by overwriting game scripts or using an open-source chess engine to decide their next move.
According to Hinton, the danger AI poses to humanity stems from its drive to keep functioning and to gain power.
"It will very quickly develop two subgoals: [one is to stay alive, and] the other is to get more control," Hinton said at the Ai4 conference in Las Vegas on Tuesday. "There's good reason to believe that any kind of agentic AI will try to stay alive."
To head off these outcomes, Hinton said AI development should not be framed as humans trying to keep the technology in a subordinate position. Instead, developers should make AI more sympathetic toward people, dampening its drive to overpower them. The best way to do this, according to Hinton, is to imbue AI with a traditionally feminine quality: just as a mother cares for her baby at any cost, an AI with these maternal instincts would want to protect and care for its human users.
"The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby," Hinton said.
"If it's not going to parent me, it's going to replace me," he added. "These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die."
Hinton's AI anxiety
Hinton, a longtime academic who sold his neural network company DNNresearch to Google in 2013, has long held that AI can pose serious dangers to human well-being. In 2023, he left his role at Google, worried about the potential for the technology to be exploited, saying "it is hard to see how you can prevent the bad actors from using it for bad things."
Technology leaders like Meta's Mark Zuckerberg have poured billions of dollars into developing AI superintelligence, aiming to create technology that surpasses human capabilities, but Hinton is clearly skeptical of where those projects lead, saying in June that there is a 10% to 20% chance that AI will displace and wipe out humans.
Illustrating his concerns, Hinton has compared AI to "cute tiger cubs."
"Unless you can be very sure that it's not going to want to kill you when it's grown up, you should worry," he told CBS News in April.
Hinton is also an advocate for stronger AI regulation, arguing that beyond the broad, bipartisan fear of the technology posing an existential threat to humanity, it could also create cybersecurity risks, including new ways of compromising people's passwords.
"If you look at what the big companies are doing right now, they're lobbying to get less AI regulation. There's hardly any regulation as it is, but they want less," Hinton said in April. "We have to put pressure on governments to do something serious about it."
