Machine learning helps diagnose seizures in unconscious patients

Researchers at Duke University have created an assistive machine learning model that significantly improves medical professionals' ability to read the electroencephalogram (EEG) charts of intensive care patients. The work was published in the New England Journal of Medicine AI.

This starfish-like graph is a visual representation of how the new AI algorithm helps medical professionals read the brainwave patterns of patients at risk of brain damage from a seizure or seizure-like event. Each colored arm represents one type of seizure-like event the brainwaves may indicate. The closer the algorithm places a particular chart to the tip of an arm, the more certain it is of its decision; charts placed nearer the center carry more uncertainty. Image courtesy of Duke University.

This computational technique could save thousands of lives a year, because an EEG is the only way to tell whether an unconscious person is having, or is at risk of having, a seizure or seizure-like event.

An EEG uses small sensors attached to the scalp to record the electrical signals produced by the brain, tracing them out as long lines that rise and fall. When a patient is having a seizure, these lines suddenly jump up and down like a seismograph, a clear and recognizable sign. But it can be difficult to distinguish a seizure-like event from other medically significant abnormalities.
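
To make the seismograph analogy concrete, here is a minimal sketch, in Python and entirely separate from the Duke model, of flagging windows where a single EEG channel swings far above its baseline. The 256 Hz sampling rate, window length, and threshold are illustrative assumptions.

```python
# Minimal sketch (not the Duke model): flagging high-amplitude activity
# in one EEG channel, the kind of "seismograph jump" a seizure produces.
import numpy as np

FS = 256          # assumed sampling rate (Hz)
WINDOW_S = 2.0    # analysis window length in seconds

def flag_windows(trace: np.ndarray, amp_threshold: float = 3.0) -> np.ndarray:
    """Return one boolean per window: True where the signal's peak
    amplitude greatly exceeds the recording's baseline spread."""
    win = int(FS * WINDOW_S)
    n = len(trace) // win
    windows = trace[: n * win].reshape(n, win)
    baseline = np.std(trace)                    # crude global baseline
    peak_amp = np.abs(windows).max(axis=1)      # peak amplitude per window
    return peak_amp > amp_threshold * baseline  # flag abnormal windows

# Synthetic example: quiet background with a burst of large oscillations.
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, FS * 20)
t = np.arange(FS * 4) / FS
eeg[FS * 8 : FS * 12] += 8 * np.sin(2 * np.pi * 3 * t)  # 3 Hz burst
print(flag_windows(eeg))  # only the burst windows come back True
```

A real pipeline would work across many channels and, as the article goes on to explain, the hard cases are precisely the ones this kind of simple thresholding cannot separate.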

The brain activity we observe exists on a continuum, with clear seizures at one end but many events in between that can also cause harm and require medication. The brainwave patterns produced by these events are harder to recognize and classify with confidence, even for highly trained neurologists, who are not available at every medical facility. But doing so is crucial to the well-being of these patients.

Dr. Brandon Westover, Associate Professor of Neurology at Harvard Medical School

The doctors turned to the lab of Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science and Electrical and Computer Engineering at Duke University, to develop tools for these assessments. Rudin specializes in creating “interpretable” machine learning algorithms, and that is what she and her colleagues built.

Most machine learning models are essentially “black boxes,” meaning it is impossible for humans to know how they reached their conclusions. An interpretable machine learning model, by contrast, must be able to show its reasoning.

To train the model, the team used more than 2,700 EEG samples and asked more than 120 experts to identify the relevant features in each graph, labeling it as a seizure, one of four types of seizure-like events, or “other,” six categories in all.

On an EEG chart, each type of event shows up as a distinctive shape or repetition within the wavy lines. But these charts are rarely clean: noise can hide the telltale signs, or different patterns can blend together into a confusing picture.

The ground truth is there, but it's difficult to read: there is inherent ambiguity in many of these charts, so we had to train the model to place its decisions on a continuum rather than in well-defined, discrete bins.

Stark Guo, PhD student at Duke University

This continuum visually resembles a colorful starfish fleeing a predator. Each differently colored arm represents one type of seizure-like event. The closer the algorithm places a particular chart to the tip of an arm, the more confident it is in its decision; charts it is less sure about sit closer to the central body.
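
The geometry being described can be sketched in a few lines. The following is an illustrative reconstruction, not the paper's actual layout, and the six class names are assumptions based on the ictal-interictal continuum terminology in the cited reference: each category gets an arm direction, and a chart's position moves toward the tip of its most likely class's arm as the model's confidence grows.

```python
# Hedged sketch of the "starfish" plot: angle = arm of the top class,
# radius = the model's confidence in that class. Class names assumed.
import numpy as np

CLASSES = ["seizure", "LPD", "GPD", "LRDA", "GRDA", "other"]

def starfish_position(probs: np.ndarray) -> tuple[float, float]:
    """Map a probability vector over the classes to an (x, y) point."""
    k = len(CLASSES)
    angles = 2 * np.pi * np.arange(k) / k   # one arm direction per class
    top = int(np.argmax(probs))
    r = float(probs[top])                   # confident -> near the arm tip
    return r * np.cos(angles[top]), r * np.sin(angles[top])

confident = np.array([0.90, 0.02, 0.02, 0.02, 0.02, 0.02])
uncertain = np.array([0.20, 0.18, 0.17, 0.16, 0.15, 0.14])
print(starfish_position(confident))  # lands far out on the seizure arm
print(starfish_position(uncertain))  # stays close to the central body
```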

In addition to this visual classification, the algorithm highlights the brainwave patterns it relied on to reach its conclusion and provides three examples of professionally diagnosed charts that it considers similar.
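
A hedged sketch of that case-based step: given an embedding of the new chart, retrieve the three previously diagnosed charts whose embeddings are most similar. The 64-dimensional embeddings and cosine similarity below are assumptions for illustration; the paper's model learns its own features.

```python
# Illustrative retrieval of the 3 most similar diagnosed charts by
# cosine similarity in a (hypothetical) learned embedding space.
import numpy as np

def three_nearest(query_emb: np.ndarray,
                  library_embs: np.ndarray,
                  labels: list[str]) -> list[str]:
    """Return labels of the 3 library charts most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    lib = library_embs / np.linalg.norm(library_embs, axis=1, keepdims=True)
    sims = lib @ q                       # cosine similarity to each chart
    top3 = np.argsort(sims)[::-1][:3]    # indices of the 3 most similar
    return [labels[i] for i in top3]

rng = np.random.default_rng(1)
library = rng.normal(size=(500, 64))            # 500 diagnosed charts
diagnoses = [f"chart_{i}" for i in range(500)]  # placeholder labels
new_chart = rng.normal(size=64)
print(three_nearest(new_chart, library, diagnoses))
```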

This allows medical professionals to quickly examine the key regions and either agree that the patterns are there or decide that the algorithm is off the mark, letting them make a more educated decision even if they are not highly trained in reading EEGs.

Alina Barnett, Postdoctoral Research Fellow, Duke University

The team tested the system by having eight medical professionals with relevant experience classify 100 EEG samples into the six categories, both with and without the algorithm's assistance.

All participants saw a significant improvement in performance, with average accuracy rising from 47% to 71%. The results also surpassed those of an earlier study that used a comparable “black box” approach.
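
The arithmetic behind that comparison is simple to reproduce. In the sketch below, the eight per-reader accuracies are invented placeholders chosen only so the two group means match the reported 47% and 71% averages; the study's actual per-reader scores are not given in this article.

```python
# Placeholder per-reader accuracies (invented); only the group means
# of 47% and 71% come from the reported results.
import numpy as np

unassisted = np.array([0.41, 0.44, 0.46, 0.47, 0.48, 0.49, 0.50, 0.51])
assisted   = np.array([0.65, 0.68, 0.70, 0.71, 0.72, 0.73, 0.74, 0.75])

print(f"mean unassisted:  {unassisted.mean():.0%}")        # 47%
print(f"mean assisted:    {assisted.mean():.0%}")          # 71%
print(f"mean improvement: {(assisted - unassisted).mean():.0%}")
```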

“We usually think of black-box machine learning models as more accurate, but for many critical applications like this, that's not true,” said Cynthia Rudin. “If the model is interpretable, it's much easier to troubleshoot, and in this case the interpretable model was more accurate. It also gives us a bird's-eye view of the types of abnormal electrical signals occurring in the brain, which can help us take care of critically ill patients.”

The research was funded by the National Science Foundation, the National Institutes of Health and a DHHS Nebraska Stem Cell Grant.

Journal Reference:

Barnett, A. J., et al. (2024). Using interpretable machine learning to improve clinician performance in classifying EEG patterns across the ictal-interictal injury continuum. New England Journal of Medicine AI.

Source: https://pratt.duke.edu/

