There was also a computational angle. The team turned to dense associative memories, an advanced type of network that builds on and extends the original Hopfield model. These systems are known for their robust memory capacity and strong pattern-retrieval capabilities.
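To make the idea concrete, here is a minimal retrieval sketch, assuming the common softmax-separation variant of a dense associative memory; the sizes, beta value, and update rule are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store K random binary patterns of dimension d. Note K > d: well beyond
# the ~0.14*d capacity of a classical Hopfield network.
d, K = 64, 200
M = rng.choice([-1.0, 1.0], size=(K, d))  # rows are stored patterns

def retrieve(state, beta=8.0, steps=5):
    """Softmax dense-associative-memory update: sharply weight the
    best-matching stored patterns and move the state toward them."""
    for _ in range(steps):
        sims = M @ state                       # similarity to each stored pattern
        w = np.exp(beta * (sims - sims.max()))
        w /= w.sum()                           # softmax over patterns
        state = np.sign(M.T @ w)               # weighted pattern average, binarized
    return state

# Corrupt a stored pattern by flipping 20% of its entries, then recall it.
target = M[0].copy()
noisy = target.copy()
flip = rng.choice(d, size=d // 5, replace=False)
noisy[flip] *= -1

recalled = retrieve(noisy)
print("overlap with target:", recalled @ target / d)  # ~1.0 on successful recall
```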
“Unfortunately, what these dense associative memory networks gain in memory capacity, they lose in biological plausibility,” Kozachkov said. “So it was not at all obvious that these networks could be implemented in biological hardware.”
Astrocytes quickly emerged as the most likely candidate once the team began thinking about biological implementation. Their anatomy, spatial organization within tissue, and biochemical dynamics all pointed to potential roles in memory.
Depending on how the system is tuned, the model can behave like a dense associative memory or exhibit transformer-like characteristics. This flexibility makes it more than a loose analogy to AI: it offers a practical framework for examining how the brain and modern machine-learning systems solve similar problems.
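One way to see why a single model can sit in both regimes is the published correspondence between softmax dense associative memories and attention: one retrieval step, with the stored patterns serving as both keys and values, is exactly a scaled dot-product attention computation. The sketch below assumes that general correspondence; whether it matches the authors' precise tuning is our assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
d, K = 32, 50
patterns = rng.standard_normal((K, d))  # stored patterns: serve as keys AND values
query = rng.standard_normal(d)          # current state: serves as the query

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dense_memory_step(q, beta):
    """One retrieval step of a softmax dense associative memory."""
    return patterns.T @ softmax(beta * (patterns @ q))

def attention(q, K_mat, V_mat, scale):
    """Scaled dot-product attention for a single query vector."""
    return V_mat.T @ softmax(scale * (K_mat @ q))

# With keys = values = stored patterns, the two computations coincide.
beta = 1 / np.sqrt(d)
print(np.allclose(dense_memory_step(query, beta),
                  attention(query, patterns, patterns, beta)))  # True

# Tuning beta moves the model between regimes: a large beta snaps the state
# onto a single stored pattern (associative recall), while a small beta blends
# many patterns, the soft mixing that attention layers perform.
```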
“If our theory is correct, it has far-reaching implications for how we think about memories in the brain, whether they are specific details or general concepts,” Kozachkov said. “Our theory suggests that memory can be encoded within the intracellular signaling pathways of a single astrocyte. Synaptic weights emerge from interactions within these pathways and from interactions between astrocytes and synapses.”
The theory's implications for AI are equally provocative. Today's machine-learning systems struggle with memory. Neural networks are limited in their ability to retain information over the long term, and mechanisms such as attention layers and external memory modules are typically added to compensate. These components increase computational cost and complexity.
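To give one concrete sense of that cost, consider the attention matrix alone, which grows quadratically with context length. The figures below are a back-of-the-envelope sketch under our own assumptions (fp32 entries, illustrative sequence lengths), not numbers from the article:

```python
# Back-of-the-envelope memory cost of a single attention matrix,
# assuming 4-byte (fp32) entries. Sizes are illustrative.
def attention_matrix_bytes(seq_len: int, bytes_per_entry: int = 4) -> int:
    return seq_len * seq_len * bytes_per_entry

for n in (1_000, 10_000, 100_000):
    print(f"context length {n:>7,}: {attention_matrix_bytes(n) / 1e6:>10,.0f} MB")
# 10x longer context -> 100x larger matrix: quadratic growth.
```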
Among the model's predictions: disrupting intracellular signaling in astrocytes should affect memory recall, and selectively interfering with astrocyte networks should impair certain types of learning. These predictions are testable, though technically challenging, and could guide future work in both basic neuroscience and brain-inspired computing.
Of course, the model remains theoretical. The researchers are careful to present their proposal as a framework rather than a settled conclusion.
“First and foremost, it would be great if experimentalists put serious effort into falsifying our model,” Kozachkov said. “I mean trying to prove it wrong. I would be happy to collaborate in that effort.”
For now, this theory invites a broader rethinking of how intelligence is constructed.
“We are at the beginning of a Cambrian explosion of intelligence,” Kozachkov said. “For the first time, we know how to build intelligent non-animal entities. The implications of this for neuroscience are difficult to overstate.”
He added that he believes neuroscience still has more to offer machine learning. “I don't think we've come close to exhausting the ideas we can take from the brain to build more intelligent systems. Not by a long shot.”
