Artificial intelligence systems built with biologically inspired architectures can simulate human brain activity even before being trained on data, according to a new study from Johns Hopkins University.
The study, published in Nature Machine Intelligence, challenges traditional approaches to building AI by prioritizing architectural design over the kind of deep learning and training that takes months, costs billions of dollars, and requires thousands of megawatts of energy.
“The way the AI field is progressing right now is to feed models with massive amounts of data and build up computing resources the size of a small city. You have to spend hundreds of billions of dollars to do that. Humans, on the other hand, learn to see using very little data,” said lead author Mick Bonner, assistant professor of cognitive science at Johns Hopkins University. “There may be good reasons why evolution converged on this design. Our study suggests that a more brain-like architectural design puts AI systems at a very advantageous starting point.”
Bonner and his team of scientists focused on three classes of network designs commonly used by AI developers as blueprints for building AI systems: transformers, fully connected networks, and convolutional networks.
The researchers iteratively modified these three blueprints, or AI architectures, to build dozens of unique artificial neural networks. They then exposed the new, untrained networks to images of objects, people, and animals, and compared the models' responses to brain activity recorded in humans and primates viewing the same images.
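Comparisons of this kind are often made with representational similarity analysis, which asks whether two systems treat the same pairs of images as similar or dissimilar. The article does not specify the team's exact method, so the sketch below is only an illustration of the general idea, using random arrays as hypothetical stand-ins for model activations and brain recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: responses of an untrained network and of a
# brain region to the same 50 images (unit/voxel counts are made up).
model_responses = rng.normal(size=(50, 512))   # 50 images x 512 model units
brain_responses = rng.normal(size=(50, 100))   # 50 images x 100 voxels

def rdm(responses):
    """Representational dissimilarity matrix: 1 minus the correlation
    between the response patterns of every pair of images."""
    return 1.0 - np.corrcoef(responses)

# Correlate the two matrices over their upper triangles; a higher value
# means the model's representational geometry is more brain-like.
iu = np.triu_indices(50, k=1)
alignment = np.corrcoef(rdm(model_responses)[iu],
                        rdm(brain_responses)[iu])[0, 1]
print(round(alignment, 3))
```

With independent random data, as here, the alignment score hovers near zero; the study's finding is that suitably configured untrained convolutional networks score much higher against real brain data than other untrained architectures do.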
Modifying the transformer and fully connected networks by adding more artificial neurons produced little change. But by fine-tuning the architecture of convolutional neural networks in a similar way, the researchers were able to generate patterns of AI activity that better simulated patterns in the human brain.
According to the researchers, the untrained convolutional neural networks matched brain activity about as well as traditional AI systems, which are typically exposed to millions or billions of images during training, suggesting that architecture plays a more important role than researchers previously realized.
“If training on large amounts of data is really the key element, then there is no way to achieve a brain-like AI system just by changing the architecture,” Bonner said. “This means that by starting with the right blueprint and perhaps incorporating other insights from biology, we have the potential to dramatically accelerate learning in AI systems.”
Next, researchers are working to develop simple learning algorithms modeled on biology that can inform new deep learning frameworks.
Reference: Kazemian A, Elmoznino E, Bonner MF. Convolutional architectures are cortex-aligned de novo. Nat Mach Intell. 2025;7(11):1834-1844. doi: 10.1038/s42256-025-01142-3
