This article is part of an exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
In the operating room, patients undergoing procedures under local anesthesia are conscious but may have difficulty expressing how much pain they are in. Others, such as infants and people with dementia, cannot communicate it at all. Searching for better ways to monitor patients’ pain, researchers have developed a contactless method that combines heart rate data with facial expressions to estimate how much pain a patient is feeling. The approach is described in a study published 14 November in IEEE Open Journal of Engineering in Medicine and Biology.
Bianca Reichard, a researcher at the Institute for Applied Informatics in Leipzig, Germany, points out that camera-based pain monitoring spares patients from wearing wired sensors, such as electrocardiogram electrodes or blood pressure cuffs, which can interfere with the delivery of medical care.
To achieve a contact-free approach, the researchers created a machine learning algorithm that analyzes aspects of pain that a camera can detect. First, the algorithm reads the nuances of a person’s facial expressions to estimate their pain level.
The system also gathers heart rate data through a technique called remote photoplethysmography (rPPG), which shines light onto a person’s skin. The amount of light reflected back can be used to detect changes in the volume of blood in the blood vessels. The researchers initially considered 15 different heart rate variability parameters measurable by rPPG and selected the seven that were statistically most relevant for predicting pain, including the maximum, minimum, and interval of the heart rate.
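The study does not detail the exact relevance test, but the idea of ranking candidate heart rate variability parameters and keeping the top seven can be sketched with a simple stand-in criterion. The sketch below (hypothetical, not the authors’ code) scores each of 15 synthetic candidate features by its absolute correlation with a binary pain label and keeps the 7 highest-scoring ones:

```python
import numpy as np

def select_top_features(X, y, k=7):
    """Rank features by absolute Pearson correlation with the pain label
    and keep the k most relevant ones. This is an illustrative stand-in
    for whatever statistical relevance test the study actually used."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xc = X - X.mean(axis=0)            # center each feature column
    yc = y - y.mean()                  # center the label
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    scores = np.abs(Xc.T @ yc) / denom # |correlation| per feature
    return np.argsort(scores)[::-1][:k]

# Toy data: 15 candidate HRV parameters, two of them made informative.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)       # 0 = no pain, 1 = pain
X = rng.normal(size=(200, 15))         # 15 candidate HRV features
X[:, 3] += 2.0 * y                     # feature 3 shifts with pain
X[:, 8] += 1.5 * y                     # feature 8 shifts with pain
top = select_top_features(X, y, k=7)
```

On this synthetic data, the two deliberately informative features land among the selected seven; any resemblance to the seven parameters chosen in the study is coincidental.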
Training dataset for pain prediction model
The team used two different datasets to train and test their pain prediction model. One is an established and widely used pain measurement database called the BioVid Heat Pain Database. Researchers created this dataset in 2013 through an experiment in which a thermode gradually produced a measured temperature increase on participants’ skin, while their physical reactions to the corresponding pain were recorded.
A second dataset was developed by the researchers for the new study: the pain levels of 29 patients undergoing cardiac procedures involving catheter insertion were measured at 5-minute intervals.
Importantly, while most other pain prediction algorithms are trained on very short video clips, Reichard and her team deliberately used longer training videos, ranging from 30 minutes to 3 hours, of realistic surgical scenarios. The training videos include scenarios in which the lighting is poor or the patient’s face is partially hidden from the camera. “This reflects a more realistic clinical situation compared to laboratory datasets,” Reichard explains.
Tests of the model show an accuracy of approximately 45 percent in predicting pain. Reichard said she was surprised the model was that accurate, considering the number of interruptions throughout the raw video footage, such as patients moving on the operating table or changes in camera angle. Although many previously developed pain prediction models achieve higher accuracy, they were trained on “ideal” short video clips free of visual obstructions; the research team instead trained their model on more realistic, if less than ideal, footage.
Additionally, Reichard points out that the team used a fairly simple statistical machine learning model. “More complex approaches, based on neural networks, for example, are likely to improve performance even further,” she says.
Reichard believes this type of research is valuable because it can support both patients and medical staff. Next, she plans to develop a similar contactless system that uses radar to measure patients’ vital signs in medical settings.