AI's godfather warns that AI could soon develop its own language and shut humans out


Geoffrey Hinton, the man many call the godfather of AI, has issued yet another warning. Speaking on the One Decision podcast, the Nobel Prize-winning scientist cautioned that artificial intelligence could soon develop its own internal language.

“Right now, AI systems do what is called ‘chain of thought’ reasoning in English, so you can follow what they are doing,” Hinton explained. “But if they develop their own internal language to talk to each other, it becomes far more frightening.”

That, he says, could take AI into territory humans cannot monitor. Machines have already demonstrated the ability to produce “terrible” thoughts, and there is no guarantee those thoughts will remain in a language we can track.

Hinton's words carry weight. After all, he received the 2024 Nobel Prize in Physics, and his early research on neural networks paved the way for today's deep learning models and large-scale AI systems. Yet he admits he did not fully appreciate the danger until much later in his career.

“I should have realised much sooner what the eventual dangers would be,” he admitted. “I always thought the future was far away, and I wish I had thought about safety sooner.” That late realization now drives his advocacy.

One of Hinton's biggest fears lies in the way AI systems learn. Unlike people, who must laboriously share their knowledge with one another, digital brains can copy and paste what they know instantly.

“Imagine if 10,000 people learned something and all of them knew it instantly — that's what happens in these systems,” he explained on BBC News.

This collective, networked intelligence means AI can scale its learning at a pace humans cannot match. Current models such as GPT-4 already outperform humans in raw general knowledge. For now, reasoning remains our edge, but that advantage, according to Hinton, is shrinking rapidly.

Hinton is vocal about these risks, but he says others in the industry are far less candid. “Many people in big companies are downplaying the risk,” he noted, suggesting that their private concerns are not reflected in their public statements. One notable exception, he says, is Google DeepMind CEO Demis Hassabis.

As for his high-profile exit from Google in 2023, Hinton says it was not a protest. “I was 75 and left Google because I couldn't program effectively anymore, but when I left, I was freer to talk about all these risks,” he said.

Governments are rolling out initiatives like the White House's new “AI Action Plan,” but Hinton believes regulation alone is not enough.

The real task, he argues, is to build AI that is genuinely benevolent, given that these systems may soon think in ways humans cannot immediately follow.


Published: August 3, 2025

Unnati Gusain



