Whether it’s studying French, reading Andy Weir’s latest novel, or hanging out with friends on St. Patrick’s Day, language is at the heart of these everyday activities. Language, which seemed simple at an early age, turns out to be extremely complex, not tied to one set of genes or one region of the brain. Cognitive neuroscientists are now using a variety of tools, including new genetic analyses and AI, to gain insight into both healthy and disordered communication.
“We still tend to study language one level at a time, including genes, brain pathways, neural activity, behavior, and computation, without fully linking those levels into a coherent mechanistic explanation,” said Tamara Swerve, chair of the language symposium at the Cognitive Neuroscience Society (CNS) annual meeting in Vancouver, British Columbia. “But now we can study those connections at multiple levels and in more detail.”
This relatively new, integrated approach has already produced results, from AI-based models that can test and potentially predict language development in children, to genetic studies linking rhythm disorders and dyslexia. These studies mark a dramatic shift away from previous research that asked where language happens in the brain, toward asking how it happens and why it varies so much from person to person, says Swerve, of the University of California, Davis, and the University of Birmingham in the UK, who studies how different factors influence language processing and comprehension.
Driving this research is a desire to understand how humans’ unique communication abilities shape what we learn, how we remember, and how our species has evolved. For Meta cognitive neuroscientist Jean-Rémi King, investigating how human language evolved means leveraging a new kind of learner: AI deep learning models. The question, he says, is how humans acquire language so efficiently, despite having orders of magnitude less exposure to words than today’s large language models (LLMs), while other species never reach similar abilities.
“With the rise of small-scale and then large-scale language models, artificial neural networks have effectively become the most efficient way to model and decode language representations in the brain. As these AI models learn, they follow specific learning trajectories, providing a source of new hypotheses and ideas about how children acquire language so effectively.”
Jean-Rémi King, Cognitive Neuroscientist at Meta
In a new study, King and colleagues found that LLMs can effectively model the neural representation of language in both adults and children as young as two years old. In collaboration with the pediatric epilepsy department at the Rothschild Foundation Hospital, the researchers examined neural activity recorded from more than 7,400 electrodes in 46 children, teens, and adults with intractable epilepsy who had stereotactic electrodes temporarily implanted before surgery.
“We found that their brains’ responses to audiobooks can be accurately modeled using AI,” said King, who is scheduled to present at the CNS conference in Vancouver. The researchers found that higher-level language functions, such as grammar, continue to mature from ages 2 to 10, compared with lower-level functions, such as the processing of fast speech sounds. “While the underlying mechanisms are still unknown, this study provides the first convincing evidence that modern AI systems can provide powerful new insights into how language develops in the human brain,” King said.
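The article does not describe the analysis pipeline, but a common way to relate language-model activations to intracranial recordings is a linear encoding model: extract contextual embeddings for each word of the stimulus, fit a regularized regression predicting each electrode’s response, and score the fit on held-out data. The sketch below illustrates that general idea with simulated data and assumed array shapes; it is not the authors’ actual method.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Illustrative encoding-model sketch (not the authors' actual pipeline).
# Assume we already have, for each word in the audiobook:
#   embeddings: (n_words, n_dims)       contextual activations from a language model
#   responses:  (n_words, n_electrodes) neural activity aligned to word onsets
rng = np.random.default_rng(0)
n_words, n_dims, n_electrodes = 5000, 768, 100
embeddings = rng.standard_normal((n_words, n_dims))
responses = (embeddings @ rng.standard_normal((n_dims, n_electrodes))) * 0.1 \
            + rng.standard_normal((n_words, n_electrodes))

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, responses, test_size=0.2, random_state=0)

# One ridge regression per electrode, with cross-validated regularization strength.
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
model.fit(X_train, y_train)
pred = model.predict(X_test)

# "Brain score": correlation between predicted and observed activity, per electrode.
scores = [np.corrcoef(pred[:, e], y_test[:, e])[0, 1] for e in range(n_electrodes)]
print(f"mean encoding score across electrodes: {np.mean(scores):.3f}")
```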
For Stephanie Forkel, a cognitive neuroscientist at Radboud University Nijmegen in the Netherlands, better understanding how language develops uniquely in each individual means taking a different approach: studying the brain wiring that connects language areas. Classical neuroscience points to “Broca’s area” or “Wernicke’s area,” as if language lived in just two places. But after working with stroke patients who had different types of brain injuries and different language disorders, Forkel quickly realized that language is “not a single ‘thing’ in the brain, but a system.” And understanding that system is the key to understanding how language varies from person to person.
In a new study using ultra-high-field 7 Tesla diffusion MRI, she and colleagues reconstructed seven major white matter pathways involved in language in 172 people. The researchers then asked whether participants could be sorted into distinct “left-brain” or “right-brain” types when it comes to language. The answer was no, says Forkel, who will present the new work at CNS. “We found that language forms a continuum in the brain, rather than a clear category or binary. This challenges long-standing categorical models of hemispheric dominance and reshapes the way we think about individual differences.”
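Forkel’s exact metric is not given here, but tract asymmetry is conventionally summarized with a laterality index, LI = (left − right) / (left + right), computed per tract and per participant; a continuum rather than a binary shows up as LI values spread smoothly across participants instead of splitting into two clusters. A minimal sketch of that idea, with assumed tract names and simulated volumes:

```python
import numpy as np

# Hypothetical sketch: laterality index per tract, per participant.
# LI = (left - right) / (left + right); +1 = fully left-lateralized, -1 = fully right.
rng = np.random.default_rng(1)
tracts = ["arcuate", "IFOF", "ILF", "uncinate", "SLF-I", "SLF-II", "SLF-III"]  # assumed names
n_participants = 172

# Simulated tract volumes; real values would come from diffusion-MRI tractography.
left = rng.gamma(shape=20, scale=100, size=(n_participants, len(tracts)))
right = rng.gamma(shape=18, scale=100, size=(n_participants, len(tracts)))

li = (left - right) / (left + right)

for i, tract in enumerate(tracts):
    vals = li[:, i]
    print(f"{tract:8s}  mean LI = {vals.mean():+.2f}  "
          f"range [{vals.min():+.2f}, {vals.max():+.2f}]")
# A continuum shows up as LI values spread smoothly across this range,
# rather than clustering into discrete "left-brain" and "right-brain" groups.
```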
Forkel’s team has secured funding for a new five-year project to understand how language emerges from its biological basis. Beyond how language is built, the project aims to reveal how language is protected from injury and disease, and how it recovers afterward.
This work dovetails with emerging research into the genetic basis of language in the brain, which has received a significant boost in recent years from large datasets, some private and some publicly available. Rayna Gordon of Vanderbilt University Medical Center says these databases, whether from genetic services like 23andMe or government-funded organizations like the National Institutes of Health, are being combined with innovative new genetic analyses to give researchers entirely new insights into the polygenic nature of language.
In fact, as Gordon emphasizes in her talk at the CNS meeting in Vancouver, language is influenced by many genes. Although researchers cannot pinpoint exactly how much a single individual’s language skills owe to genetics versus environment, important patterns emerge across large populations. “Thanks to publicly funded data resources, we have been able to begin extensive research in language genetics and connect it to its neural basis in some truly innovative ways.”
She and her team use questionnaires and other data specific to language and music development, triangulated with data on the function of specific genes, to show how genetic variation contributes to individual differences in language skills. For example, one recent study looked at 1 million 23andMe participants with and without dyslexia, in addition to a separate dataset that included language tests. The team discovered multiple genes associated with dyslexia, which may contribute to earlier diagnosis and treatment of language disorders.
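The article does not detail the statistics, but one standard way such GWAS findings are carried toward individual-level prediction is a polygenic score: a weighted sum of effect-allele counts, with weights taken from the GWAS effect sizes. The toy sketch below uses made-up variants and effect sizes to show only the arithmetic; it is not Gordon’s analysis.

```python
import numpy as np

# Toy polygenic-score sketch with made-up variants and effect sizes.
# PGS_i = sum_j beta_j * dosage_ij, where dosage is the count (0/1/2) of the
# effect allele at variant j for person i, and beta_j comes from a GWAS.
rng = np.random.default_rng(42)
n_people, n_variants = 1000, 500

betas = rng.normal(0, 0.02, size=n_variants)                  # small per-variant effects
dosages = rng.binomial(2, 0.3, size=(n_people, n_variants))   # effect-allele counts

pgs = dosages @ betas

# Standardize so scores are comparable across cohorts.
pgs_z = (pgs - pgs.mean()) / pgs.std()
print(f"polygenic score: mean {pgs_z.mean():.2f}, sd {pgs_z.std():.2f}, "
      f"top decile cutoff {np.quantile(pgs_z, 0.9):.2f}")
```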
Importantly, these approaches allow researchers to combine insights from multiple large datasets rather than relying on a single group of participants, something that was not possible with traditional neuroscience studies. “Using new techniques, we can actually do this data integration across data streams, which allows not only for basic scientific hypothesis development but also for potential clinical applications,” Gordon said.
In another study, Gordon and colleagues showed that language and music share a common biological basis that traces back to the genome. They identified 16 separate regions of the genome that are shared between rhythm disorders and dyslexia. “We also looked at the overlap epidemiologically in a large sample, so rhythm disorders may actually be a risk factor for language and reading disorders,” she says.
Taken together, the research presented at the CNS conference on a multi-method approach to understanding language in the brain demonstrates the adaptive nature of the brain. “The human brain is not built from a rigid blueprint, but from an adaptable architecture,” says Forkel.
Indeed, session chair Swerve said: “Language understanding is a form of fast, adaptive cognition. By connecting findings across genes, brain pathways and networks, neural decoding, and computational models, we can now more fully understand how the brain comprehends and produces language.”
Source:
Cognitive Neuroscience Society
