What's one thing standing between today's AI and full AGI? Consistency.
In an episode of the "Google for Developers" podcast released Tuesday, Google DeepMind CEO Demis Hassabis said advanced models like Google's Gemini still stumble on problems that most schoolkids can solve.
"It shouldn't be that easy for the average person to find trivial flaws in the system," he said.
He pointed to a Gemini model enhanced with Deep Think reasoning that achieved a gold-medal performance at the International Mathematical Olympiad, the world's most prestigious math competition.
But these same systems “can still make simple mistakes in high school math,” he said.
"In some dimensions, they're really good. In other dimensions, their weaknesses can be exposed very easily," he added.
Hassabis' view echoes that of Google CEO Sundar Pichai, who has described the current stage of development as "AJI," or artificial jagged intelligence. Pichai used the term in an episode of Lex Fridman's podcast that aired in June to describe a system that is strong in some areas but fails in others.
Hassabis said resolving AI's inconsistency will take more than scaling up data and compute. Some capabilities still missing in "reasoning and planning and memory" have yet to be cracked, he added.
He said the industry also needs better testing and "new, more difficult benchmarks" to pinpoint what models are good at and where they fall short.
Hassabis and Google did not respond to requests for comment from Business Insider.
Big Tech hasn't cracked AGI
Big Tech players like Google and OpenAI are racing to achieve AGI, the theoretical threshold at which AI can reason like a human.
Hassabis said in April that AGI would arrive "in the next five to 10 years."
AI systems remain prone to hallucinations, misinformation, and basic errors.
OpenAI CEO Sam Altman offered a similar take before the launch of GPT-5 last week. While calling his company's model a significant advancement, he told reporters it hadn't yet reached true AGI.
"This is clearly a generally intelligent model, but I think in the way most of us define AGI, we're still missing something quite important, or many things quite important," Altman said during the press call.
Altman added that one of those missing pieces is the ability of models to learn continuously on their own.
"One big one is that this is not a model that continuously learns as it's deployed from the new things it finds," he said.

