AI model predicts new visual mechanisms in the real brain

Machine Learning


summary: For decades, neuroscience textbooks have taught that the first stages of visual processing rely on two types of cells specialized for detecting “edges,” the abrupt transitions between bright and dark. But an international team has now overturned this model. By using AI to create “digital twins” of mouse neurons, researchers discovered a previously unknown third type of neuron with a two-part receptive field.

One part identifies textures (such as fur or feathers), while the other registers the precise placement of features (such as noses and mouths). This discovery helps explain how the brain separates complex objects from their background far more efficiently than simple edge detection alone could.

Key facts

  • Advantages of “digital twin”: The researchers used deep neural networks to simulate individual mouse neurons, allowing them to predict which particular images would cause cells to “fire” before testing them in a real brain.
  • Bipartite receptive field: Unlike traditional cells that respond only to brightness and orientation, these new neurons have two distinct parts, each specialized for a different spatial frequency.
  • Texture and placement: One part of the cell responds to high-frequency details (tight patterns/textures), and the other part responds to low-frequency regions (broader shapes and arrangements).
  • Separating objects: These neurons are specifically tuned to the signals needed to distinguish between an object (such as a bird) and its background (such as a tree), a task that simple “edge” cells struggle with.
  • Science verified by AI: The Göttingen team’s AI predictions were confirmed by experiments in real mouse brains at Stanford University, showing that the AI is not simply “hallucinating” these cells.

source: University of Göttingen

The visual cortex is the part of the brain that allows visual perception. In this region, millions of nerve cells called neurons process stimuli from the outside world. They only react when an object with certain characteristics comes into our field of vision.

According to textbooks, there are two main types of neurons in the first stage of the visual cortex that specialize in edges: abrupt transitions between light and dark.

Newly discovered neurons in the visual cortex of mice feature a two-part receptive field that allows the brain to process texture and spatial arrangement separately to identify objects. Credit: Neuroscience News

Now, an international team of researchers from Stanford University and the University of Göttingen has used machine learning to discover neurons in mice that carry out this visual processing in a previously unknown way. These neurons respond to different “spatial frequencies,” that is, to how rapidly the patterns of objects change across the field of view.

The research was published in Nature Neuroscience.

To make this discovery, the researchers used deep neural networks, the same technology that underlies modern AI models, to create digital twins of mouse neurons. These models can predict the activity of individual neurons, allowing the researchers to systematically investigate which images activate a cell most strongly. Researchers from the University of Göttingen played a key role in developing these digital twins.
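The loop described above, record responses, fit a predictive model, use it in silico to synthesize a maximally exciting image, then test that image against the real neuron, can be sketched in a few lines. The sketch below is a deliberately minimal stand-in: the "neuron" is a simulated linear-nonlinear cell and the digital twin is a linear model fit by least squares, whereas the actual study used deep neural networks and real recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "ground-truth" neuron: a rectified linear filter whose
# receptive field is a small Gabor-like patch (a stand-in for a real cell).
size = 16
y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
true_rf = np.exp(-(x**2 + y**2) / 0.3) * np.cos(8 * x)

def neuron(img):
    """Simulated firing rate: rectified projection onto the receptive field."""
    return max(0.0, float(np.sum(img * true_rf)))

# Step 1: record responses to random stimuli (the "large-scale recording").
stimuli = rng.normal(size=(2000, size, size))
responses = np.array([neuron(s) for s in stimuli])

# Step 2: fit a digital twin to the recordings. Here the twin is linear
# and fit by least squares; the study used deep neural networks.
X = stimuli.reshape(len(stimuli), -1)
w, *_ = np.linalg.lstsq(X, responses, rcond=None)
twin_rf = w.reshape(size, size)

# Step 3: in silico experiment. For a linear twin, the most exciting
# image (at fixed contrast) is the normalized receptive field itself.
mei = twin_rf / np.linalg.norm(twin_rf)

# Step 4: "in vivo" validation. The synthesized image, scaled to the
# contrast of the random stimuli, should out-drive everything recorded.
print(neuron(mei * np.linalg.norm(stimuli[0])) > responses.max())  # True
```

For a deep-network twin, step 3 would instead use gradient ascent on the pixels, but the closed-loop logic is the same.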

“Neural networks are an essential tool for discovering new properties in large data sets, such as these new neuron properties,” explains Professor Fabian Sinz from the Institute of Computer Science at the University of Göttingen.

Professor Alexander Ecker of the same institute emphasizes that “the best predicted images are not a figment of the AI model’s imagination.”

“Experiments in real mouse brains, led by researchers at Stanford University, confirm that the properties predicted by our model are real.”

Each neuron in the visual cortex is responsible for a specific area of the visual field. A neuron responds only when the appropriate stimulus appears in its part of the visual field, for example an edge in the upper left corner.

The relevant area is known as the neuron’s “receptive field.” The classic textbook model distinguishes between two types of neurons in the visual system. “Simple cells” are stimulated when an edge, meaning an abrupt transition between light and dark, appears at a particular location in their receptive field. “Complex cells” also respond to edges, but the edge’s exact location does not matter as long as it has the cell’s preferred orientation. Both cell types are therefore specialized in detecting differences in brightness.

The newly discovered neurons have a two-part receptive field. One part responds to textures, such as photo backgrounds or detailed patterns like bird feathers. The other part responds to features such as mouths and noses, but only when the pattern appears in exactly the right place.

The key point is that the two parts specialize in different “spatial frequencies,” that is, how often a pattern such as a bar or stripe repeats per unit distance. High frequencies correspond to dense patterns with fine details and sharp lines, while low frequencies correspond to coarse patterns with larger, more uniform areas.
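Spatial frequency can be made concrete with a Fourier transform: splitting an image's power spectrum at a cutoff radius separates coarse structure from fine detail. A minimal sketch (the cutoff of 8 cycles per image and the test patterns are arbitrary illustrative choices, not from the study):

```python
import numpy as np

def band_energy(img, cutoff_cycles):
    """Split an image's power between low and high spatial frequencies.

    cutoff_cycles: radial frequency (cycles per image) separating
    coarse from fine structure.
    """
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))  # center DC
    power = np.abs(f) ** 2
    n = img.shape[0]
    fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    radius = np.hypot(fx, fy)                 # radial frequency of each bin
    low = power[radius <= cutoff_cycles].sum()
    high = power[radius > cutoff_cycles].sum()
    return low, high

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
coarse = np.sin(2 * x)[None, :] * np.ones((n, 1))   # 2 cycles: broad stripes
fine = np.sin(16 * x)[None, :] * np.ones((n, 1))    # 16 cycles: tight stripes

low_c, high_c = band_energy(coarse, cutoff_cycles=8)
low_f, high_f = band_energy(fine, cutoff_cycles=8)
print(low_c > high_c, high_f > low_f)  # True True
```

The broad stripes put nearly all their power below the cutoff and the tight stripes above it, which is exactly the distinction the two subfields are described as being tuned to.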

“Classical simple and complex cells are tuned to simple edges defined by differences in brightness,” summarizes Professor Andreas Tolias of Stanford University.

“In contrast, the two-part neurons we found respond to more complex information about edges, namely differences in texture and spatial frequency. These are exactly the kinds of signals needed to separate an object from its background.”
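The division of labor described here can be illustrated with a toy one-dimensional model, not the fitted model from the paper: a “texture” subfield that pools a high-frequency filter over all positions (so shifting the pattern does not change its output) and a “placement” subfield that applies a low-frequency filter at one fixed location.

```python
import numpy as np

n = 32
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
fine = np.sin(8 * x)[:8]      # short high-frequency probe (texture)
coarse = np.sin(x)            # full-width low-frequency probe (placement)

def bipartite_response(signal):
    """Toy bipartite neuron: shift-invariant texture energy plus a
    position-locked response to coarse structure."""
    # Texture part: fine-filter energy, max-pooled over all positions.
    texture = max(abs(float(np.dot(signal[i:i + 8], fine)))
                  for i in range(n - 8 + 1))
    # Placement part: coarse filter applied at one fixed position only.
    placement = max(0.0, float(np.dot(signal, coarse)))
    return texture, placement

pattern = np.sin(8 * x)              # fine texture
shifted = np.roll(pattern, 5)        # same texture, displaced
shape = np.sin(x)                    # coarse shape at its preferred position
moved = np.roll(shape, n // 2)       # same shape, moved half an image

t1, _ = bipartite_response(pattern)
t2, _ = bipartite_response(shifted)
_, p1 = bipartite_response(shape)
_, p2 = bipartite_response(moved)
print(abs(t1 - t2) < 1e-6, p1 > 1.0 and p2 == 0.0)  # True True
```

Shifting the texture leaves the texture output unchanged, while moving the coarse shape away from its preferred position silences the placement output, the two behaviors the quote attributes to the two subfields.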

Answers to key questions:

Q: I thought we already knew how vision worked?

answer: We knew the basics! For 50 years, we thought the visual cortex was essentially an “edge detector.” This study shows it is much more like a high-tech photo editor: these neurons don’t just see lines, they simultaneously register the “texture” of a sweater and the “shape” of the person wearing it.

Q: What is a “digital twin” in neuroscience?

answer: It is a virtual model of a single brain cell. Scientists used AI to build a computer version of a mouse neuron that behaves just like the real thing. They can show the “twin” millions of images in seconds to see what it reacts to, and then go back to the actual mouse to verify the results. It’s like a shortcut to understanding the brain’s software.

Q: Will this change the way we build AI for cameras?

answer: Absolutely. Most computer vision today is still based on the old “edge detection” model. By mimicking these newly discovered two-part neurons, we could potentially build AIs that are much better at identifying objects in messy, cluttered environments, just as mice (and humans) are.

Editorial note:

  • This article was edited by the editors of Neuroscience News.
  • Journal articles were reviewed in full text.
  • Additional context added by staff.

About this visual neuroscience and AI research news

author: Melissa Sorich
source: University of Göttingen
contact: Melissa Sorich – University of Göttingen
image: Image credited to Neuroscience News

Original research: Open access.
“Functional bipartite invariance in mouse primary visual receptive fields” by Zhiwei Ding, Dat Tran, Kayla Ponder, Zhuokun Ding, Rachel Froebe, Lydia Ntanavara, Paul G. Fahey, Erick Cobos, Luca Baroni, Maria Diamantaki, Eric Y. Wang, Andersen Chang, Stelios Papadopoulos, Jiakun Fu, Taliah Muhammad, Christos Papadopoulos, Santiago A. Cadena, Alexandros Evangelou, Konstantin Willeke, Fabio Anselmi, Sofia Sanborn, Jan Antolik, Emmanouil Froudarakis, Saumil Patel, Edgar Y. Walker, Jacob Reimer, Fabian H. Sinz, Alexander S. Ecker, Katrin Franke, Xaq Pitkow and Andreas S. Tolias. Nature Neuroscience
DOI:10.1038/s41593-026-02213-3


Abstract

Functional bipartite invariance in mouse primary visual receptive fields

Sensory systems support generalization by representing features that persist under input fluctuations. However, identifying the neuronal basis of these invariants remains difficult due to high-dimensional and nonlinear neural computations.

Here we exploit an inception loop paradigm, iterating between large-scale recordings, predictive models, in silico experiments and in vivo validation, to characterize neuronal invariance in mouse primary visual cortex (V1). We synthesized varied exciting inputs (VEIs), distinct images that each strongly drive a target neuron.

These VEIs revealed a novel bipartite invariance: one subfield encodes high-frequency texture that is invariant to shifts, while the other encodes a low-frequency pattern at a fixed position. This division coincides with object boundaries defined by spatial frequency differences in highly activating images, suggesting a contribution to segmentation.

Analysis of the MICrONS dataset revealed an invariance hierarchy among excitatory neurons in mouse V1 layer 2/3: postsynaptic neurons showed greater invariance than their presynaptic inputs, and neurons with lower invariance formed more connections.

Taken together, these results provide insight and a scalable methodology for mapping neuronal invariance.


