Summary: Researchers have found that humans and AI share similar interactions between two learning systems. Experiments showed that AI develops in-context learning abilities only after extensive incremental practice, much as humans do.
Both show a trade-off between flexibility and retention, with more difficult tasks strengthening long-term memory while easier ones are handled more flexibly in context. These findings could shape the design of AI systems that work more intuitively with human cognition.
Key facts
- Shared strategies: Both humans and AI use in-context and incremental learning in complementary ways.
- Meta-learning breakthrough: AI acquired flexible in-context learning only after thousands of incremental training tasks.
- Trade-off: Like humans, AI balances flexibility (rapid rule learning) against retention (lasting long-term memory updates).
Source: Brown University
New research has found similarities in how humans and artificial intelligence integrate two types of learning, providing new insight into how people learn as well as ways to develop more intuitive AI tools.
Led by Jake Russin, a postdoctoral researcher in computer science at Brown University, the study found by training AI systems that their flexible and incremental learning modes interact much as human working memory and long-term memory do.
“These results help explain why humans appear to be rule-based learners in some situations and incremental learners in others,” Russin said. “They also suggest something about what modern AI systems have in common with the human brain.”
Russin is jointly advised by Michael Frank, a professor of cognitive and psychological sciences who directs the Center for Computational Brain Science at Brown's Carney Institute for Brain Science, and Ellie Pavlick, an associate professor of computer science who leads Brown's AI Research Institute on Interaction for AI Assistants.
This study was published in Proceedings of the National Academy of Sciences.
Depending on the task, humans take in new information in one of two ways. For some tasks, such as learning the rules of tic-tac-toe, “in-context” learning lets people grasp the rules quickly after just a few examples. For others, incremental learning builds understanding gradually over time, like the slow, sustained practice involved in learning to play a song on the piano.
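This distinction can be sketched in a few lines of code. The toy Python below is purely illustrative, with invented numbers and a simple linear model standing in for whatever networks the study actually used: gradual weight updates play the role of incremental learning, while inferring the same rule directly from a few examples held in context plays the role of in-context learning.

```python
import numpy as np

# Illustrative toy only, not the study's model. In-weight (incremental)
# learning: a linear learner improves through many small weight updates,
# like slow, sustained piano practice.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -0.5])       # the hidden "rule" to be learned
w = np.zeros(2)                      # weights: a long-term memory analog
for _ in range(500):                 # many incremental trials
    x = rng.normal(size=2)
    err = w @ x - true_w @ x         # prediction error on this trial
    w -= 0.05 * err * x              # small error-driven weight update
print("weights after incremental practice:", w.round(2))

# In-context learning: no weights change; the same rule is inferred on
# the fly from a few examples held in context (a working-memory analog).
context_x = rng.normal(size=(3, 2))  # three worked examples
context_y = context_x @ true_w
w_ctx, *_ = np.linalg.lstsq(context_x, context_y, rcond=None)
print("rule inferred from context alone:  ", w_ctx.round(2))
```

The design point is the contrast: the first learner needs hundreds of trials but stores what it learned in its weights, while the second recovers the rule instantly from three examples and retains nothing once the context is gone.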
Researchers knew that humans and AI each integrate both forms of learning, but it was not clear how the two types work together. In the course of the research team's ongoing collaboration, Russin, whose work bridges machine learning and computational neuroscience, developed the theory that the dynamics between the two learning modes in AI systems could resemble the interaction between human working memory and long-term memory.
To test this theory, Russin used “meta-learning”, a kind of training that helps AI systems learn how to learn, to manipulate the key properties of the two learning types. These experiments revealed that an AI system's ability to learn in context emerges only after meta-learning across many examples.
One experiment, adapted from studies with human participants, tested the AI's ability to combine familiar ideas to deal with new situations: after being taught a list of colors and a list of animals, could the AI correctly identify novel color-animal combinations (such as a green giraffe)?
After meta-learning on 12,000 similar tasks, the AI acquired the ability to successfully identify new combinations of color and animal.
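As a rough sketch of what one such task might look like, the hypothetical Python below generates episodes of this kind: a handful of study examples shown in context, followed by a query about a color-animal pair never presented together. The word lists, label format, and episode size are invented for illustration; only the general setup, many thousands of related few-shot tasks, comes from the study.

```python
import random

COLORS = ["red", "green", "blue", "yellow"]
ANIMALS = ["giraffe", "zebra", "lion", "panda"]

def make_episode(rng):
    """One hypothetical episode: study examples appear in context, then a
    query asks about a color-animal pair never shown together. The study
    set still covers that color and that animal separately, so the
    held-out combination can be inferred compositionally."""
    color, animal = rng.choice(COLORS), rng.choice(ANIMALS)
    study = []
    for c in COLORS:    # every color appears, but never the held-out pair
        study.append((c, rng.choice([a for a in ANIMALS if (c, a) != (color, animal)])))
    for a in ANIMALS:   # every animal appears, but never the held-out pair
        study.append((rng.choice([c for c in COLORS if (c, a) != (color, animal)]), a))
    rng.shuffle(study)
    context = [f"{c} {a} -> label({c},{a})" for c, a in study]
    return context, f"{color} {animal} -> ?", f"label({color},{animal})"

rng = random.Random(0)
context, query, answer = make_episode(rng)  # meta-training would loop over ~12,000 of these
print(*context, query, "expected: " + answer, sep="\n")
```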
The results suggest that in both humans and AI, fast and flexible in-context learning emerges only after a certain amount of incremental learning has taken place.
“The first time you learn a board game, it takes a while to figure out how to play,” Pavlick said. “By the time you learn your 100th board game, you pick up the rules quickly, even if you've never seen that particular game before.”
The team also discovered a trade-off between retention and flexibility in learning. As with humans, the more difficult a task is for the AI to complete correctly, the more likely the AI is to remember how to perform it in the future.
According to Frank, who has studied this paradox in humans, this is because errors cue the brain to update information stored in long-term memory, whereas error-free performance achieved through in-context learning increases flexibility but does not engage long-term memory in the same way.
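A toy calculation makes that logic concrete. In the minimal sketch below (an assumed caricature of error-driven learning, not the study's model), updates are proportional to the current error, so a task already solved correctly in context writes almost nothing into the weights, while a difficult task forces large, lasting updates.

```python
def total_weight_change(initial_error, lr=0.1, steps=50):
    """Toy error-driven learner: each update is proportional to the
    current prediction error, so the cumulative change written into the
    weights tracks how wrong the system was to begin with."""
    error, change = initial_error, 0.0
    for _ in range(steps):
        delta = lr * error    # error-driven update, as in gradient descent
        change += abs(delta)  # how much gets written into the weights
        error -= delta        # the error shrinks as the weights adapt
    return change

# A task solved in context produces almost no error signal, so the weights
# (the long-term memory analog) barely change; a hard task forces updates.
print("easy task (near-zero error):", round(total_weight_change(0.01), 3))
print("hard task (large error):    ", round(total_weight_change(1.0), 3))
```

Run as-is, the easy task changes the weights by about 0.01 in total while the hard task changes them by about 1.0, which is the flexibility-versus-retention trade-off in miniature.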
For Frank, who specializes in building biologically inspired computational models to understand human learning and decision-making, the team's work demonstrated how analyzing the pros and cons of different learning strategies in artificial neural networks can provide new insights into the human brain.
“Our results tie together different aspects of human learning that neuroscientists had not previously grouped together,” Frank said.
This work also suggests important considerations for developing intuitive and reliable AI tools, especially in sensitive domains such as mental health.
“To build kind and reliable AI assistants, we need to understand how human and AI cognition each function, and the extent to which they do the same kinds of things,” Pavlick said. “These discoveries are a great first step.”
Funding: This study was supported by the Office of Naval Research and a Center of Biomedical Research Excellence grant from the National Institute of General Medical Sciences.
About this AI and learning research news
Author: Kevin Stacey
Source: Brown University
Contact: Kevin Stacey – Brown University
Image: The image is credited to Neuroscience News
Original research: Closed access.
“Parallel trade-offs in human cognition and neural networks: Dynamic interactions between in-context and in-weight learning” by Jake Russin et al. PNAS
Abstract
Parallel trade-offs in human cognition and neural networks: Dynamic interactions between in-context and in-weight learning
Human learning embodies a striking duality. Sometimes, we appear capable of quickly inferring logical, compositional rules and benefit from structured curricula (e.g., formal education), while other times we rely on an incremental approach or trial and error, learning better from curricula that are randomly interleaved.
Influential psychological theories explain this seemingly disparate behavioral evidence by positing two qualitatively different learning systems: one for rapid rule-based inference (e.g., working memory) and one for slow, incremental adaptation (e.g., long-term and procedural memory).
How such theories can be reconciled with neural networks remains unclear: because they learn through incremental weight updates, neural networks are a natural model of the latter system but appear incompatible with the former.
However, recent evidence suggests that meta-learning neural networks and large language models are capable of in-context learning (ICL): the ability to flexibly infer the structure of a new task from a few examples.
In contrast to standard in-weight learning (IWL), which is analogous to synaptic change, ICL is more naturally tied to the activation-based dynamics thought to underlie human working memory.
Here we show that the dynamic interaction between ICL and IWL naturally captures a wide range of learning phenomena observed in humans, including curriculum effects on category-learning and compositional tasks, and a trade-off between learning flexibility and retention.
Our work demonstrates how emergent ICL equips neural networks with fundamentally different learning properties that can coexist with their native IWL, offering a unified perspective on dual-process theories of human cognition.
