Summary: EPFL researchers have developed a new machine learning algorithm called CEBRA, which can predict what mice see by decoding their neural activity.
The algorithm maps brain activity to specific frames and, after an initial training period, can predict unseen movie frames directly from brain signals alone.
CEBRA can also be used to predict arm movements in primates and reconstruct the position of rats as they move through the arena, suggesting potential clinical applications.
Important facts:
- Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) have developed a machine learning algorithm called CEBRA. CEBRA learns structure hidden in the neural code to reconstruct what mice see while watching movies and to decode arm movements in primates.
- CEBRA is based on contrastive learning, a technique that lets researchers jointly consider neural data and behavioral labels such as rewards, measured movements, and sensory features such as image color and texture.
- CEBRA’s strengths include its ability to combine data across modalities, limit nuisance variation, and accurately reconstruct synthetic data. The algorithm has potential applications in animal behavior, gene-expression data, and neuroscience research.
Source: EPFL
Is it possible to reconstruct what someone sees based on brain signals alone? Not yet. But EPFL researchers have taken a step in that direction by introducing a new algorithm for building artificial neural network models that capture brain dynamics with astonishing accuracy.
The new machine learning algorithm, rooted in mathematics and called CEBRA (pronounced “zebra”), learns the hidden structure of the neural code.
The information that CEBRA learns from raw neural data can be tested after training via decoding, a method used in brain-machine interfaces (BMIs); here, it was used to decode what a mouse sees while it watches a movie.
However, CEBRA is not limited to visual cortex neurons, or even to brain data. The work also shows that it can predict arm movements in primates and reconstruct the position of rats running freely around an arena.
This research was published in Nature.
“This work is just one step towards the theory-backed algorithms that are needed to enable high-performance BMIs in neurotechnology,” said Mackenzie Mathis, holder of the Bertarelli Chair of Integrated Neuroscience at EPFL and principal investigator of the study.
After learning the latent (i.e., hidden) structure of the mouse visual system during an initial training period, in which movie features are mapped to brain signals, CEBRA can predict previously unseen movie frames directly from brain signals alone.
The data used for the video decoding were open-access data from the Allen Institute in Seattle, Washington. The brain signals were obtained either by directly measuring brain activity via electrode probes inserted into the visual cortex of the mouse brain, or by using optical probes in genetically modified mice engineered so that activated neurons glow green.

During training, CEBRA learns to map brain activity to specific movie frames. It performs well even with less than 1% of the neurons in the visual cortex, a region of the mouse brain that comprises roughly 500,000 neurons.
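To make this decoding step concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn) of how one might decode which movie frame was shown from a low-dimensional neural embedding with a k-nearest-neighbor classifier. It is not the authors’ pipeline; the array names, sizes, and random placeholder data are assumptions for illustration only, so the reported accuracy on this synthetic data is at chance.

```python
# Illustrative sketch only: decode which movie frame was shown from a
# low-dimensional neural embedding using a k-nearest-neighbour classifier.
# All names, sizes and data below are placeholders, not the authors' data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timebins, latent_dim, n_frames = 9000, 8, 900

# Placeholder inputs: one embedding vector per time bin (as produced by a
# trained CEBRA-style model) and the index of the movie frame shown then.
embeddings = rng.normal(size=(n_timebins, latent_dim))
frame_ids = rng.integers(0, n_frames, size=n_timebins)

train_X, test_X, train_y, test_y = train_test_split(
    embeddings, frame_ids, test_size=0.2, random_state=0
)

decoder = KNeighborsClassifier(n_neighbors=5)
decoder.fit(train_X, train_y)
print("decoded-frame accuracy:", decoder.score(test_X, test_y))
```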
“Specifically, CEBRA is based on contrastive learning, a technique that learns how high-dimensional data can be arranged, or embedded, in a lower-dimensional space called the latent space, so that similar data points end up close to each other and dissimilar data points end up further apart,” explains Mathis.
“This embedding can be used to infer hidden relationships and structure in the data. Neural data and behavioral labels, such as rewards, measured movements, and sensory features like color and texture, can be considered jointly.”
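As a minimal, hypothetical sketch of the contrastive idea Mathis describes (not CEBRA’s actual implementation), the PyTorch snippet below shows an InfoNCE-style loss: an encoder embeds neural activity into a low-dimensional latent space, and the loss pulls “positive” pairs (e.g., time bins with similar behavioral labels) together while pushing randomly drawn “negative” samples apart. All dimensions, tensor names, and data here are illustrative placeholders.

```python
# Minimal sketch of a contrastive (InfoNCE-style) embedding loss.
# Not the CEBRA implementation; dimensions and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_neurons, latent_dim = 120, 3
encoder = nn.Sequential(
    nn.Linear(n_neurons, 64), nn.ReLU(), nn.Linear(64, latent_dim)
)

def info_nce(anchor, positive, negatives, temperature=1.0):
    # The positive pair sits at column 0 of the logits, so cross-entropy
    # with target 0 pulls anchors towards positives and away from negatives.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True) / temperature
    neg_sim = (anchor @ negatives.T) / temperature
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    targets = torch.zeros(anchor.shape[0], dtype=torch.long)
    return F.cross_entropy(logits, targets)

# Placeholder mini-batch: anchor time bins, behaviorally similar "positive"
# time bins, and randomly drawn "negative" time bins (all synthetic here).
batch = 256
x_anchor, x_positive, x_negative = (torch.randn(batch, n_neurons) for _ in range(3))

loss = info_nce(encoder(x_anchor), encoder(x_positive), encoder(x_negative))
loss.backward()  # gradients shape the latent space described in the quote above
```

In CEBRA’s hypothesis-driven mode, it is this label-conditioned choice of positive and negative samples that lets behavioral variables shape the learned latent space.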
“CEBRA outperforms other algorithms at reconstructing synthetic data, which is critical for comparing algorithms,” said Steffen Schneider, co-first author of the paper. “Its strengths also lie in its ability to combine data across modalities, such as movie features and brain data, which helps limit nuisances such as changes in the data that depend on how they were collected.”
“The goal of CEBRA is to uncover structure in complex systems. And, given that the brain is the most complex structure in our universe, it is the ultimate test space for CEBRA. It can also give us insight into how the brain processes information and, by combining data across animals and species, could be a platform for discovering new principles in neuroscience,” says Mathis.
“This algorithm is not limited to neuroscience research, as it can be applied to many datasets involving time or joint information, including animal behavior and gene-expression data. The potential clinical applications are therefore exciting.”
About this machine learning research news
Author: Press Office
Source: EPFL
Contact: Press Office – EPFL
Image: The image is credited to Neuroscience News
Original research: open access.
“Learnable Latent Embeddings for Joint Behavior and Neural Analysis” by Mackenzie Mathis et al. Nature
Abstract
Learnable Latent Embeddings for Joint Behavior and Neural Analysis
Mapping behavior to neural activity is a fundamental goal of neuroscience. As the ability to record large-scale neural and behavioral data improves, there is growing interest in modeling neural dynamics and investigating neural representations during adaptive behavior.
In particular, although neural latent embeddings can reveal underlying correlates of behavior, we lack nonlinear techniques that can explicitly and flexibly leverage joint behavioral and neural data to uncover neural dynamics.
Here we fill this gap with a new encoding method, CEBRA. CEBRA jointly uses behavioral and neural data in a (supervised) hypothesis- or (self-supervised) discovery-driven manner to generate consistent, high-performance latent spaces. We show that consistency can be used as a metric to discover meaningful differences, and that inferred latent variables can be used for decoding.
We validate its accuracy and demonstrate the tool’s utility for both calcium imaging and electrophysiology datasets, across sensory and motor tasks, and for simple or complex behaviors across species. It allows single- and multi-session datasets to be leveraged for hypothesis testing, or it can be used label-free.
Finally, we show that CEBRA can be used to map space, uncover complex kinematic features, produce consistent latent spaces across two-photon and Neuropixels data, and rapidly decode natural video from the visual cortex with high accuracy.