You learn your letters before you learn to read, and you learn to count before you learn to add and subtract. According to a team of scientists, the same principle applies to AI.
In work published in the journal Nature Machine Intelligence, the researchers found that recurrent neural networks (RNNs) handle difficult, complex tasks better later on when they are first trained on simple cognitive tasks.
The paper's authors call this training regimen kindergarten curriculum learning: it first instills an understanding of basic tasks, then draws on that knowledge to take on more challenging ones.
"From a very early stage in life, we develop a set of basic skills, such as maintaining balance or playing with a ball," explains Cristina Savin, an associate professor at New York University's Center for Neural Science and Center for Data Science.
"With experience, we can combine these basic skills to support complex behaviors, for example juggling several balls while riding a bike.
"Our work applies these same principles to enhancing the capabilities of RNNs: the network first learns a set of simple tasks, retains that knowledge, and then combines these learned tasks in order to successfully complete a more sophisticated one."
RNNs, neural networks designed to process information sequentially using stored knowledge, are particularly useful for speech recognition and language translation. However, when it comes to complex cognitive tasks, training RNNs with existing methods proves difficult, and these methods may fail to capture important aspects of the animal and human behavior that AI systems aim to reproduce.
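To illustrate the sequential processing described above (this is not code from the study), a minimal NumPy sketch of a vanilla RNN update might look as follows; all names and sizes here are hypothetical:

```python
import numpy as np

def rnn_step(h, x, W_h, W_x, b):
    """One recurrent update: the new hidden state mixes stored
    knowledge (the previous state h) with the current input x."""
    return np.tanh(W_h @ h + W_x @ x + b)

rng = np.random.default_rng(1)
W_h = rng.normal(scale=0.5, size=(4, 4))  # recurrent weights (illustrative sizes)
W_x = rng.normal(scale=0.5, size=(4, 2))  # input weights
b = np.zeros(4)

h = np.zeros(4)                    # start with an empty memory
for x in rng.normal(size=(5, 2)):  # feed a 5-step sequence one element at a time
    h = rnn_step(h, x, W_h, W_x, b)
# h now summarizes the whole sequence in a fixed-size state
```

Because the same update is applied at every step, the final state depends on the entire input sequence, which is what makes RNNs suited to sequential data like speech and text.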
To address this, the study's authors, who also included David Hocker, a postdoctoral researcher at NYU's Center for Data Science, and Christine Constantinople, a professor at NYU's Center for Neural Science, first studied how animals learn a complex task.
Rats were trained to search for water in boxes with several ports. To know when and where water was available, the rats had to learn that water delivery was associated with specific sounds and port lights, and that no water was provided immediately after these cues. Reaching the water therefore required the animals to build basic knowledge of several contingencies (e.g., waiting after the visual and auditory cues before trying to access the water).
These results pointed to principles of how animals apply knowledge of simple tasks when taking on more complex ones.
The scientists then used these findings to train RNNs in a similar way, but instead of water retrieval, the RNNs handled a wagering task that required the networks to build on basic decisions in order to maximize payoffs over time. They then compared this kindergarten curriculum learning approach with existing RNN training methods.
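The staged-training idea can be sketched in miniature. The following is not the paper's wagering task or an RNN; it is a hypothetical linear model trained by gradient descent, where the harder second stage is warm-started from the weights learned on a simpler first stage rather than from scratch:

```python
import numpy as np

def train_stage(weights, inputs, targets, lr=0.1, steps=200):
    """One curriculum stage: fit a linear readout y = x @ w by gradient descent."""
    w = weights.copy()
    for _ in range(steps):
        pred = inputs @ w
        grad = inputs.T @ (pred - targets) / len(inputs)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# Stage 1 (simple task): learn to copy the first feature.
w = np.zeros(3)
w = train_stage(w, X, X[:, 0])

# Stage 2 (harder task): learn a weighted combination of features,
# starting from the stage-1 solution instead of zeros.
target_w = np.array([1.0, 0.5, -0.5])
w = train_stage(w, X, X @ target_w)
# w is now close to target_w
```

The key design choice, mirroring the curriculum idea, is that knowledge from the simple stage is retained (the weights are carried over) rather than discarded before the harder stage begins.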
Overall, the team's results showed that RNNs trained with the kindergarten curriculum learned faster than those trained with existing methods.
"AI agents need to go through kindergarten first and then learn more complex tasks later," Savin said.
"Overall, these results point to ways to improve learning in AI systems, and they call for a more holistic understanding of how past experiences shape the learning of new skills."
Support for this research came from the National Institute of Mental Health, and the work was conducted using computing resources from the Empire AI consortium, with support from the State of New York, the Simons Foundation, and the Secunda Family Foundation.
Source: NYU
