In raw processing power, large language models far outstrip infants. But when a team of researchers did the math, they found a staggering gap: for a human to take in as much language as ChatGPT consumes during training would take more than 92,000 years.
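The scale of that gap can be sketched with back-of-the-envelope arithmetic. The corpus size and daily word count below are illustrative assumptions for the sketch, not figures taken from the study:

```python
# Rough comparison (illustrative numbers, not the study's actual figures):
# how many years would a human need to hear as many words as a large
# language model sees during training?

TRAINING_CORPUS_WORDS = 1e12   # assumed LLM training corpus: ~1 trillion words
WORDS_HEARD_PER_DAY = 30_000   # assumed speech a child hears per day

words_per_year = WORDS_HEARD_PER_DAY * 365
years_needed = TRAINING_CORPUS_WORDS / words_per_year

print(f"{years_needed:,.0f} years")  # on the order of 90,000 years
```

With these assumed inputs the result lands in the same ballpark as the researchers' 92,000-year figure, which is the point of the comparison: the model's data intake dwarfs a human lifetime by several orders of magnitude.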
The calculation highlights a long-standing mystery: children master speech and grammar after just a few years of everyday experience, while cutting-edge AI, trained on billions of words, still stumbles over basic nuances.
New research conducted at the Max Planck Institute for Psycholinguistics provides the most comprehensive explanation to date.
The study, led by developmental psychologist Caroline Rowland and colleagues at the UK-based ESRC LuCiD Centre, suggests that children's critical advantage is not the amount of data they receive, but the way they interact with it.
Their active, embodied, socially embedded learning engine may also hold lessons for next-generation artificial intelligence.
How researchers study learning in young children
Over the past decade, scientists have developed numerous tools, including head-mounted eye trackers, wearable microphones, and machine-vision scene analyzers, to capture moment-by-moment snapshots of early childhood.
A typical modern dataset might include everything a two-year-old sees, hears, grasps, and mouths on an ordinary afternoon.
What's missing is an integrated theory of how these multisensory streams are translated into grammar and vocabulary.
How senses shape speech
The framework they propose rests on several mutually reinforcing principles.
The first is multisensory integration. Unlike text-bound chatbots, babies process language against a rich backdrop of sights, sounds, tastes, smells, and textures.
For example, a spoonful of mashed banana arrives with a color, a smell, a shape, a temperature, and a spoken label from the caregiver.
Over time, these correlated cues help infants crack linguistic codes that confound purely text-based learners.
Moving body, active mind
The second is embodiment. A child's body is constantly in motion – rolling, crawling, pointing, mouthing objects.
Each action changes the incoming data stream and generates fresh correlations between words and physical experience, such as linking the word “cup” with the feel of its plastic rim.
This kind of “closed loop” activity lets children test their hypotheses on the fly, making movement part of the learning process itself.
Social immersion plays a third important role. Caregivers instinctively adjust their speech in real time, repeating words, exaggerating intonation, and changing topics based on the child's attention.
AI systems read from static datasets, but children receive a dynamic, personalized curriculum delivered by human minds that evolved to teach.
Curiosity strengthens learning in young children
The fourth principle is progressive plasticity. Young brains reorganize rapidly, strengthening and pruning neural connections according to experience.
This adaptability allows children to shift their learning priorities, starting with sounds, then words, then grammar, without discarding their earlier knowledge.
Finally, there is motivation and curiosity. Perhaps most importantly, toddlers want to decipher the world around them. They actively seek novelty, demand clarity, show visible delight when they succeed, and keep up daily language practice without explicit instruction.
“AI systems process data…but kids really live it,” explained Rowland. “Their learning is embodied, interactive, and deeply embedded in social and sensory contexts.”
“They seek out experiences and dynamically adapt their learning accordingly. They explore objects with their hands and mouths, and crawl toward new toys that might be interesting.”
What children can teach AI
The authors believe these insights could reshape machine learning strategies. Current large language models consume terabytes of written text.
To narrow the performance gap, engineers could add multisensory input, motor exploration, and real-time social feedback loops, essentially giving silicon learners a simulated childhood.
“AI researchers could learn a lot from babies,” Rowland said. “If you want machines that learn language the way humans do, you probably need to rethink how you design them from scratch.”
Helping languages thrive again
Beyond technology, the framework could illuminate adult second-language acquisition, evolutionary linguistics, and educational practice.
For example, it suggests that immersive, interactive classrooms may outperform rote memorization drills, and that recreating rich sensory contexts for young speakers could help revive near-extinct languages.
Toddlers simulated in the lab
The Rowland group has already begun testing the models against longitudinal recordings from multilingual families. Meanwhile, cognitive neuroscientists are planning brain-imaging studies to map how sensory-motor loops sculpt linguistic circuits.
On the AI front, several labs are experimenting with embodied agents that crawl through a virtual nursery, manipulating objects while linking words to experience.
Whether these synthetic infants can close the 92,000-year gap remains to be seen. What is clear is that humanity's smallest linguists wield a secret that even the largest neural networks have yet to crack.
The study is published in the journal Trends in Cognitive Sciences.
Check us out on EarthSnap, a free app brought to you by Eric Ralls and Earth.com.
