Can AI learn like we do?




It reads, speaks, collates vast amounts of data, and suggests business decisions. Today's artificial intelligence may seem more human than ever before. But AI still has significant shortcomings. Kyle Daruwalla, a NeuroAI researcher at Cold Spring Harbor Laboratory (CSHL), explains:

“ChatGPT and current AI technology are all great, but they are still limited when it comes to interacting with the real world. Even for tasks like solving math problems or writing essays, they need billions of training examples before they become good at them.”

Daruwalla has been searching for new, unconventional ways to design AI that can overcome these computational obstacles, and he may have found one.

The key is moving data. Currently, most of a modern computer's energy consumption goes into shuttling data around, which can take a very long time in an artificial neural network made up of billions of connections. So to find a solution, Daruwalla turned to one of the most computationally powerful and energy-efficient machines in existence: the human brain.

CSHL NeuroAI Researcher Kyle Daruwalla talks about making AI more energy efficient and accessible to everyone.

Daruwalla designed a new way for AI algorithms to move and process data more efficiently, based on the way the human brain takes in new information. The design allows individual AI “neurons” to receive feedback and adjust on the fly, rather than waiting for the entire circuit to update at once. This way, data doesn't have to travel as far and is processed in real time.
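To make the idea concrete, here is a minimal sketch of layer-local learning, where each layer adjusts its weights as soon as its own activations are available, instead of waiting for a full backward pass through the whole network. The specific update used below (a simple Hebbian outer product with weight decay) is an illustrative stand-in, not Daruwalla's published information-bottleneck rule.

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalLayer:
    """A layer that updates its own weights from purely local signals."""

    def __init__(self, n_in, n_out, lr=0.01):
        self.W = rng.standard_normal((n_out, n_in)) * 0.1
        self.lr = lr

    def forward_and_update(self, x):
        y = np.tanh(self.W @ x)
        # Local update: uses only this layer's input and output, so no
        # global backward pass is needed; the adjustment happens "on the fly"
        # as data flows forward. (Hebbian outer product + decay, for
        # illustration only.)
        self.W += self.lr * (np.outer(y, x) - 0.1 * self.W)
        return y

# A small stack of layers: data passes forward once, and every layer
# learns immediately rather than waiting for end-to-end feedback.
layers = [LocalLayer(8, 6), LocalLayer(6, 4), LocalLayer(4, 2)]

x = rng.standard_normal(8)
for layer in layers:
    x = layer.forward_and_update(x)

print(x.shape)  # final activation vector from the last layer
```

In a conventional backpropagation setup, no weight could change until the error signal had traveled back through every layer; here each layer's update depends only on data already sitting next to it, which is the data-movement saving the article describes.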

“In our brains, connections are constantly shifting and adjusting,” Daruwalla says. “It's not like we pause everything, adjust it, and then go back to our old selves.”

A comparison of a typical machine learning model (A) and Daruwalla's new design (B). In row A, the input data must pass through all layers of the neural network before the AI model receives feedback, which takes more time and energy. In row B, the new design generates and incorporates feedback at each network layer.

The new machine learning model also provides evidence for an as-yet-unproven theory linking learning and academic performance with working memory, the cognitive system that lets us stay focused on a task while recalling stored knowledge and experiences.

“In neuroscience, there are theories about how working memory circuits drive learning, but nothing as concrete as our rules that actually link the two. And that's one of the neat things we stumbled upon here. This theory led to the rule that in order to regulate each synapse individually, you need to have working memory next to it.”

Daruwalla's design could usher in a new generation of AI that learns like we do. It would not only make AI more efficient and accessible, but also mark something of a full-circle moment for neuro-AI. Long before ChatGPT uttered its first digital syllables, neuroscience had been supplying AI with valuable data. Soon, AI may return the favor.

Written by: Luis Sandoval, Communications Specialist | 516-367-6826


Funding


US Air Force Research Laboratory, National Science Foundation

Citation


Daruwalla, K., et al., “Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates.” Frontiers in Computational Neuroscience, May 16, 2024. DOI: 10.3389/fncom.2024.1240348
