AI deep learning decodes hand gestures from brain images



Geralt/Pixabay

Brain Computer Interface (BCI), also known as Brain Machine Interface (BMI), offers hope to people who have lost their ability to move and communicate. The pattern recognition capabilities of artificial intelligence (AI) are accelerating innovation. A new study from the University of California, San Diego (UC San Diego), published in the Oxford University Press journal Cerebral Cortex, shows how AI machine learning can decode hand gestures from magnetoencephalography (MEG) brain images, a non-invasive imaging technique.

“Our MEG-RPSnet model outperforms two state-of-the-art neural network architectures and conventional machine learning methods for EEG-based BCI, and is comparable or superior to machine learning methods employing invasive cortical electrocorticography,” wrote Mingxiong Huang, co-director of the MEG Center at UC San Diego’s Qualcomm Institute and senior author, together with researchers Yifeng Bu, Deborah L. Harrington, Roland R. Lee, Qian Shen, Annemarie Angeles-Quinto, Zhengwei Ji, Hayden Hansen, Jaqueline Hernandez-Lucas, Jared Baumgartner, Tao Song, Sharon Nichols, Dewleen Baker, Ramesh Rao, Imanuel Lerman, Tuo Lin, and Xin Ming Tu.

Magnetoencephalography is a noninvasive neuroimaging method that maps brain activity by measuring the magnetic fields produced by electrical currents in the brain. MEG allows real-time tracking of brain activation sequences with millisecond temporal resolution.

Here’s how MEG works: the net effect of charged ions flowing through brain cells generates electrical currents, and those currents produce a magnetic field. When thousands of neurons fire together, the field becomes measurable outside the head. Because the neuromagnetic signals produced by the brain are extremely small, they require specialized sensors: the MEG scanner is equipped with superconducting quantum interference device (SQUID) sensors.

In this study, the UC San Diego team used a helmet containing a 306-sensor array to sense the magnetic fields produced by brain currents flowing between neurons. Twelve participants were instructed to put on the MEG helmet and randomly perform the rock, paper, or scissors hand gestures used in the game rock-paper-scissors. The MEG helmet recorded participants’ brain activity during each gesture.
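For a concrete picture of what such recordings look like as data, the sketch below prepares single-trial MEG data with the open-source MNE-Python library. The file name, event codes, filter band, and time window are illustrative assumptions, not details from the study.

```python
# A minimal sketch of preparing single-trial 306-channel MEG data for
# gesture classification with MNE-Python. File path, stim channel,
# event IDs, and the epoch window are hypothetical placeholders.
import mne
import numpy as np

# Load a raw MEG recording (hypothetical file path).
raw = mne.io.read_raw_fif("subject01_rps_raw.fif", preload=True)

# Band-pass filter to a typical frequency range of interest (assumed).
raw.filter(l_freq=1.0, h_freq=40.0)

# Find trigger events marking each gesture cue (hypothetical event IDs).
events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"rock": 1, "paper": 2, "scissors": 3}

# Cut the continuous recording into single trials around each cue.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.0, baseline=(None, 0),
                    picks="meg", preload=True)

# X: trials x 306 sensors x time samples; y: gesture label per trial.
X = epochs.get_data()
y = epochs.events[:, 2]
print(X.shape, np.unique(y))
```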

Researchers then used an AI convolutional neural network (CNN) deep learning algorithm to learn to classify the gestures from the MEG recordings.
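The article does not describe the MEG-RPSnet architecture itself, but a minimal PyTorch sketch illustrates the general idea: a CNN that takes a single trial of 306 sensor time series and outputs scores for the three gestures. This is not the authors’ model; all layer sizes and the 250-sample trial length are assumptions.

```python
# A minimal sketch (PyTorch) of a CNN for single-trial MEG gesture
# classification. NOT the authors' MEG-RPSnet; sizes are illustrative.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, n_sensors=306, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution across each sensor's time series.
            nn.Conv1d(n_sensors, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time dimension
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):          # x: (batch, 306 sensors, time samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)  # logits for rock / paper / scissors

model = GestureCNN()
trials = torch.randn(8, 306, 250)  # 8 fake trials, 250 time samples each
print(model(trials).shape)         # torch.Size([8, 3])
```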

“We obtained an average classification accuracy of 85.56% in 12 subjects on a single-trial basis,” reported researchers from the University of California, San Diego.
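As a rough illustration of how a single-trial accuracy figure like this is obtained, the sketch below trains a tiny stand-in classifier on fake data for one participant and measures accuracy on held-out trials. The train/test split, optimizer settings, and model are assumptions, not the study’s protocol.

```python
# A hedged sketch of per-subject, single-trial evaluation on fake data.
import torch
import torch.nn as nn

# Fake data standing in for one participant's MEG trials:
# 120 trials x 306 sensors x 250 time samples, labels in {0, 1, 2}.
X = torch.randn(120, 306, 250)
y = torch.randint(0, 3, (120,))
train_X, test_X, train_y, test_y = X[:90], X[90:], y[:90], y[90:]

# A tiny stand-in classifier (not MEG-RPSnet).
model = nn.Sequential(
    nn.Conv1d(306, 32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 3),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):            # a few epochs for illustration
    opt.zero_grad()
    loss = loss_fn(model(train_X), train_y)
    loss.backward()
    opt.step()

model.eval()
with torch.no_grad():
    acc = (model(test_X).argmax(dim=1) == test_y).float().mean().item()
print(f"single-trial test accuracy: {acc:.2%}")
```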

Researchers also found two specific brain regions whose sensors alone allowed the AI deep learning models to classify gestures about as well as whole-brain models. “Surprisingly, we also found that deep learning models achieve similar classification performance to whole-brain sensor models when using only mid-parietal-occipital region sensors or occipitotemporal region sensors,” the scientists wrote.
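In code terms, this region-restricted comparison amounts to training the same classifier on a subset of the 306 sensors. The index range below is a hypothetical placeholder; the article does not list the study’s actual sensor groupings.

```python
# A hedged sketch of restricting training data to one sensor region.
# The index range is a placeholder, not the study's sensor grouping.
import numpy as np

X = np.random.randn(120, 306, 250)          # fake trials x sensors x time
occipitotemporal_idx = np.arange(200, 260)  # hypothetical region indices
X_region = X[:, occipitotemporal_idx, :]    # keep only that region's sensors
print(X_region.shape)                       # (120, 60, 250)
```

In practice, one would select the region’s channels by name in MNE-Python rather than by raw index, then train and evaluate the same CNN on the reduced sensor set.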

Researchers at the University of California, San Diego, harnessed the pattern recognition capabilities of AI deep learning trained on non-invasive brain imaging data and achieved a proof-of-concept that could lead to a smart brain-computer interface to help people who are paralyzed or have lost the ability to speak.

“Taken together, these results show that noninvasive MEG-based BCI applications show promise for future BCI development in hand gesture decoding,” the scientists concluded.

Copyright © 2023 Cami Rosso. All rights reserved.


