Interpretable machine learning helps clinicians classify EEG abnormalities


A research team led by Duke University has developed a machine learning (ML) tool designed to help clinicians accurately read electroencephalogram (EEG) charts of intensive care unit patients.

EEG is currently the only reliable way to determine whether an unconscious patient is having seizures or experiencing seizure-like events. These events can be life-threatening, but getting a good EEG reading can be difficult.

In an EEG, sensors attached to a patient's scalp record the brain's electrical signals in the form of wavy lines. During a seizure, these lines jump up and down, creating a distinctive pattern that many clinicians can easily recognize. However, seizure-like phenomena can manifest in more subtle ways that make them difficult to capture on an EEG.

“The brain activity we observe exists on a continuum, and seizures are at one end of the spectrum, but there are many events in between that can be harmful and require medication,” Brandon Westover, MD, PhD, associate professor of neurology at Massachusetts General Hospital and Harvard Medical School, explained in a news release. “The brainwave patterns caused by these events are more difficult to confidently recognize and classify, even for highly trained neurologists, who are not found in every medical facility. But doing so is crucial to the well-being of these patients.”

To enhance the detection of seizure-like events, researchers have turned to interpretable machine learning. Unlike black-box AI tools, interpretable models must provide the rationale for reaching a conclusion, which could be useful for healthcare applications such as EEG classification.

The researchers noted that while seizure-like phenomena often appear on an EEG as specific repeating shapes or lines, variability in EEG appearance, combined with noise in the data, can make the graphs difficult to read and interpret.

“The ground truth is there, but it's hard to read,” said Stark Guo, a doctoral student at Duke University who worked on the study. “There's ambiguity in a lot of these charts, so we had to train models to place decisions on a continuum rather than on clearly defined discrete segments.”

To develop the model, the team collected EEG waves from more than 2,700 patients and asked 120 experts to flag relevant features in each graph, classifying them as a seizure, one of four types of seizure-like events, or “other.”

Using this data, the team trained the model to place each EEG on a chart. The chart, which resembles a multi-colored starfish, displays the continuum along which EEGs can fall. Each colored “arm” represents one type of seizure-like event, and the model positions each EEG along an arm according to the certainty of its classification: EEGs placed near the tip of an arm are those the model classifies with high confidence, while EEGs near the center are those it is less certain about.
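The placement rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual method: the class names and the mapping from a six-way probability vector (seizure, four seizure-like event types, “other”) to an arm and a radius are assumptions made for the sake of the example.

```python
import numpy as np

# Hypothetical class labels; the four seizure-like event types are
# stand-ins for whatever categories the study actually used.
CLASSES = ["seizure", "type_A", "type_B", "type_C", "type_D", "other"]

def place_on_chart(probs):
    """Map a classifier's probability vector to (label, angle, radius).

    The arm (angle) is the most probable class; the radius is the
    model's confidence, so confidently classified EEGs land near an
    arm's tip and ambiguous ones near the center.
    """
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()              # normalize to a distribution
    k = int(np.argmax(probs))                # which arm
    angle = 2 * np.pi * k / len(CLASSES)     # arms evenly spaced around the chart
    radius = float(probs[k])                 # confidence -> distance from center
    return CLASSES[k], angle, radius

# A sample the model is fairly sure belongs to the second category:
label, angle, radius = place_on_chart([0.05, 0.80, 0.05, 0.04, 0.03, 0.03])
```

Here the sample would be drawn 80% of the way out along the second arm; a vector close to uniform would land near the center, visually flagging it as ambiguous.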

In addition to the visualization, the model also reveals the EEG pattern used to make the determination and provides three example EEGs that have been expert-reviewed and annotated to appear similar to the one in question.
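The “three similar examples” step can be read as a nearest-neighbor lookup over a library of expert-annotated EEGs. The sketch below assumes such a library has already been embedded into some learned feature space; the feature dimensions and data are hypothetical, not the study's pipeline.

```python
import numpy as np

def top_k_similar(query_feat, library_feats, k=3):
    """Return indices of the k library samples closest to the query,
    by Euclidean distance in feature space."""
    dists = np.linalg.norm(library_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]

# Toy library: 50 annotated EEGs represented by 16-d feature vectors.
rng = np.random.default_rng(0)
library = rng.normal(size=(50, 16))

# A query very close to library sample 7, plus a little noise:
query = library[7] + 0.01 * rng.normal(size=16)
idx = top_k_similar(query, library)
```

Showing the annotated neighbors alongside the query is what lets a clinician judge whether the flagged pattern really does resemble known examples of that category.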

“This allows medical professionals to quickly see the key areas and either agree that a pattern exists or decide that the algorithm is off the mark,” said Alina Barnett, a postdoctoral researcher at Duke University. “Even if they're not highly trained in reading EEG, they can make a more educated decision.”

To test their model, the team asked eight experts to classify 100 EEG samples into categories, with or without the aid of machine learning. Without the tool, the group achieved an accuracy rate of 47%, which rose to 71% once the tool was applied. The group also outperformed participants who had used a black-box model for the same task in a previous study.

“Typically, we think of black box machine learning models as being more accurate, but in many of these important applications, that's just not true,” said Cynthia Rudin, Earl D. McLean, Jr. Professor of Computer Science and Electrical and Computer Engineering at Duke University. “When a model is interpretable, it's much easier to troubleshoot, and in this case, the interpretable model was indeed more accurate, and it gives us a bird's-eye view of the types of abnormal electrical signals occurring in the brain, which is incredibly helpful in caring for critically ill patients.”
