Through the peephole: If this week’s science news is any indication, it won’t be long before Big Brother starts peeking inside our heads. Hot on the heels of a US scientist unveiling a GPT model that decodes human thoughts into words, Swiss researchers have demonstrated a machine-learning model that converts neural activity in mice into video.
Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) have developed a machine-learning algorithm called CEBRA (pronounced “zebra”), short for Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables. In layman’s terms, CEBRA is a model that can decode images from the mouse brain.
The team has been working on this project for over a decade and made early breakthroughs in decoding rudimentary shapes from human and animal EEG activity. Scientists at EPFL have now deciphered entire movie clips from mouse thought patterns with the help of advanced machine learning.
In their experiments, the researchers used measurements from two types of mice: some with electrodes inserted into their visual cortex, and some genetically engineered so that their neurons glow green when active. One group of mice was shown a black-and-white clip of a man running to a car and taking something out of the trunk. Data from this group were used to train CEBRA to associate brain activity with each video frame.
A second group of mice was shown the same movie while CEBRA processed their brain activity. The decoded video matched the actual clip, apart from a few stutters, probably caused by the mice moving around and not paying close attention.
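The two-stage setup described above — pair each frame of the clip with the neural activity it evokes, then match a second viewing’s activity to the closest learned pattern — can be illustrated with a toy nearest-neighbor decoder. This is only a conceptual sketch in plain NumPy, not the actual CEBRA algorithm (which first learns a contrastive embedding of the recordings); the “neural” data here is synthetic and invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_neurons = 60, 100

# Invented tuning: each video frame evokes a characteristic
# population activity pattern (rows = frames, cols = neurons).
frame_patterns = rng.normal(size=(n_frames, n_neurons))

# Training data: activity "recorded" while the first group of mice
# watches the clip, i.e. each frame's pattern plus recording noise.
train_activity = frame_patterns + 0.1 * rng.normal(size=(n_frames, n_neurons))

# Test data: a second viewing (the article's second group of mice).
test_activity = frame_patterns + 0.1 * rng.normal(size=(n_frames, n_neurons))

def decode(activity, reference):
    """Assign each activity vector to the frame whose training
    pattern is nearest in Euclidean distance."""
    # Pairwise distances, shape (test frames, training frames).
    d = np.linalg.norm(activity[:, None, :] - reference[None, :, :], axis=-1)
    return d.argmin(axis=1)

decoded = decode(test_activity, train_activity)
accuracy = (decoded == np.arange(n_frames)).mean()
print(f"decoded {accuracy:.0%} of frames correctly")
```

With realistic noise levels the real problem is far harder, but the sketch shows why pre-training on the same clip matters: the decoder can only output frames it has already seen paired with activity.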
Does this mean we are approaching technology that can project a person’s memories and dreams onto a movie screen or computer monitor? Not exactly. Note that CEBRA was able to do this only because it had previously been trained on the same video clip. It’s not hard to imagine a future model advanced enough to reconstruct images without this kind of task-specific pre-training, but that isn’t possible at the moment.
The scientists see their breakthrough as a new research tool. They say CEBRA will provide insight into neural function and how the brain interprets stimuli, and believe this will prove useful in diagnosing and treating brain disorders such as Alzheimer’s disease and Tourette’s syndrome.
That said, other research out this week suggests we’re closer than ever to using machines to read our minds. Scientists at the University of Texas at Austin have developed a GPT model that analyzes brain activity in fMRI scans and decodes it into words with impressive accuracy. The technology is far from perfect, but it’s the first time a machine has deciphered thoughts into complex verbal descriptions rather than single words or very short phrases.
You can check out a preprint of the CEBRA study on Cornell University’s arXiv. The team has also posted the code on GitHub for other neuroscientists to use.
