Phone app uses AI to detect depression from facial cues

Dartmouth researchers report they have developed the first smartphone application that uses a combination of artificial intelligence and facial image-processing software to reliably detect the onset of depression before users even notice anything unusual.

The app, called MoodCapture, uses a phone's front-facing camera to capture a person's facial expressions and surroundings during regular use, then evaluates the images for clinical cues associated with depression. In a study of 177 people diagnosed with major depressive disorder, the app identified early symptoms of depression with 75% accuracy.

These results suggest that, with further development, the technology could be publicly available within the next five years, say the researchers, who are based in the Department of Computer Science and the Geisel School of Medicine.

The research team published their findings on the arXiv preprint server on February 27, ahead of a presentation at the Association for Computing Machinery's CHI 2024 conference in May. Papers presented at CHI are peer-reviewed prior to acceptance and are published in the conference proceedings.

“This is the first time that natural, in-the-wild images have been used to predict depression,” said Andrew Campbell, the paper's corresponding author and Dartmouth's Albert Bradley 1915 Third Century Professor of Computer Science. “Ultimately, there is a movement in digital mental-health technology to develop tools that can reliably and non-intrusively predict the mood of people diagnosed with major depression.”

“People use facial-recognition software to unlock their phones hundreds of times a day,” Campbell says; his own phone recently showed that he had unlocked it more than 800 times in one week.

“MoodCapture uses a similar technology pipeline of facial-recognition technology with deep learning and AI hardware, so there is great potential to scale up this technology without any additional input or burden on the user,” he says. “A person just unlocks their phone, and MoodCapture can understand their depression dynamics and suggest they seek help.”

In the study, the application captured 125,000 images of participants over a 90-day period. People in the study consented to having their photos taken with their phone's front-facing camera but did not know when it was happening.

Photos from the first group of participants were used to train MoodCapture to recognize depression. These participants were photographed at random by their phones' front-facing cameras as they answered statements such as “I have felt down, depressed, or hopeless.” The prompt comes from the eight-item Patient Health Questionnaire (PHQ-8), which clinicians use to detect and monitor major depression.

The researchers ran image-analysis AI over these photos, enabling MoodCapture's predictive model to learn correlations between self-reports of feeling depressed and specific facial features, such as gaze, eye movement, head position, and muscle rigidity, as well as environmental features, including dominant colors, lighting, photo location, and the number of people in the image.
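As a concrete illustration of this training step, here is a minimal Python sketch, using synthetic stand-in data, of fitting a classifier that maps per-photo features to PHQ-8-style self-reports. The feature layout, labels, and choice of logistic regression are assumptions made for illustration; the paper's actual model is not reproduced here.

```python
# Hypothetical sketch of the training step described above; the real
# MoodCapture pipeline is not public, so features, labels, and the
# classifier are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for per-photo features the article mentions: gaze, eye
# movement, head position, muscle rigidity, plus environmental cues
# such as dominant color, lighting, and number of people in frame.
N_PHOTOS, N_FEATURES = 5000, 8
X = rng.normal(size=(N_PHOTOS, N_FEATURES))

# Stand-in labels: a binarized PHQ-8 "felt down, depressed, or
# hopeless" response collected around the time each photo was taken.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=N_PHOTOS) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Learn correlations between image features and self-reported mood.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```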

The concept is that every time a user unlocks their phone, MoodCapture analyzes a sequence of images in real time. The AI model draws associations between facial expressions and background details found to be important in predicting the severity of depression. Over time, MoodCapture identifies the image features unique to each user. For example, if someone consistently appears with a flat expression in a dimly lit room over an extended period, the AI model might infer that the person is experiencing the onset of depression.
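The unlock-time flow described above can be outlined as follows. This is a hypothetical sketch, not the app's code: `extract_features` and `trained_model` are assumed to come from a pipeline like the training sketch above, and the threshold value is invented for illustration.

```python
# Hypothetical sketch of the on-unlock scoring flow; all names and
# the threshold are illustrative assumptions, not the app's design.
import numpy as np

DEPRESSION_THRESHOLD = 0.7  # illustrative cutoff, not from the paper

def score_unlock_burst(frames, extract_features, trained_model):
    """Score a short burst of front-camera frames captured at unlock.

    Averages per-frame probabilities so one odd frame (bad lighting,
    motion blur) does not dominate the estimate.
    """
    feats = np.vstack([extract_features(f) for f in frames])
    probs = trained_model.predict_proba(feats)[:, 1]
    return float(probs.mean())

def on_unlock(frames, extract_features, trained_model):
    score = score_unlock_burst(frames, extract_features, trained_model)
    if score > DEPRESSION_THRESHOLD:
        # Per the researchers' comments later in the article, a real
        # app would surface a gentle nudge rather than a diagnosis.
        print("Consider reaching out to a friend or taking a walk.")
```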

To test the predictive model, the researchers had a separate group of participants answer the same PHQ-8 questions while MoodCapture photographed them. The software analyzed these photos for indicators of depression based on the data collected from the first group. It was with this second group that MoodCapture's AI correctly determined whether people were depressed with 75% accuracy.
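One detail worth making explicit is that the test group consists of different people than the training group, so the reported 75% reflects generalization to unseen users, not just unseen photos. A hedged sketch of that person-level split, using scikit-learn's group-wise splitter on synthetic data:

```python
# Illustrative sketch of a person-level holdout: photos from the same
# participant never appear in both train and test sets. The data and
# model are synthetic stand-ins, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))
participant = rng.integers(0, 177, size=5000)  # which person owns each photo
y = (X[:, 0] + rng.normal(size=5000) > 0.8).astype(int)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=participant))

model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
print(f"accuracy on held-out participants: {model.score(X[test_idx], y[test_idx]):.2f}")
```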

“This points the way to a powerful tool for passively assessing a person's mood and using that data as the basis for therapeutic intervention,” Campbell said, noting that an accuracy of 90% would be the threshold for a viable sensor. “My sense is that technology like this will be generally available within five years. We've shown that this is possible.”

Study co-author Nicholas Jacobson, an assistant professor of biomedical data science and psychiatry at Dartmouth's Center for Technology and Behavioral Health, said MoodCapture captures major depression as it fluctuates on irregular timescales.

“Many of our therapeutic interventions for depression are centered around longer stretches of time, but these people experience ups and downs in their condition. Traditional assessments miss most of what depression really is,” said Jacobson, who directs the AI and Mental Health: Innovation in Technology-Guided Healthcare (AIM HIGH) Laboratory.

“Our goal is to capture the changes in symptoms that people with depression experience in their daily lives,” Jacobson says. “If we can use this to predict and understand the rapid changes in depression symptoms, we can ultimately head them off and treat them, blunting depression's impact.”

Jacobson predicts that technologies such as MoodCapture could help close the wide gap between when people with depression need intervention and when they actually have access to mental-health resources. On average, he says, people spend less than 1% of their lives with a clinician such as a psychiatrist. “The goal of these technologies is to provide more real-time support without adding strain to the health-care system,” Jacobson says.

Ideally, an AI application such as MoodCapture would suggest preventive measures, such as getting outside or checking in with a friend, rather than explicitly telling a person that they may be entering a depressive episode, Jacobson says.

“Telling someone that something bad is going on can make the situation even worse,” he says. “We believe MoodCapture opens the door to assessment tools that can help detect depression in the moments before it worsens. This kind of work would have been unimaginable a little more than a decade ago.”

The research stems from a grant Jacobson leads from the National Institute of Mental Health to study the use of deep learning and passive data collection to detect depression symptoms in real time. It also builds on a 2012 study led by Campbell's lab that collected passive and automatic data from the phones of participants at Dartmouth to assess their mental health.

But advances in smartphone cameras since then have allowed researchers to capture clearer, more candid photos of the kind taken during regular phone use, Campbell said. Campbell is director of emerging technologies and data analytics at the Center for Technology and Behavioral Health, where he leads a team developing mobile sensors that can track metrics such as emotional state and job performance based on passive data.

The new research shows that passive photos are key to the success of mobile-based therapeutic tools, Campbell says. They capture mood more accurately and more frequently than user-generated selfies, and they don't burden users by requiring active engagement. “These neutral photos are very much like seeing someone in the moment, when they're not putting on a veneer, which enhanced the performance of our facial-expression prediction model,” Campbell says.

Subigya Nepal, a doctoral candidate at the Guarini School of Graduate and Advanced Studies who works in Campbell's research group and is co-lead author of the study with Guarini doctoral student Arvind Pillai, says the next steps include training the AI on a greater diversity of environments, improving its diagnostic capability, and strengthening its privacy safeguards.

The researchers envision an iteration of MoodCapture that would ensure photos never leave a person's phone, Nepal says. Instead, photos would be processed on the user's device to extract the facial expressions associated with depression and convert them into a code for the AI model. “Even if the data leaves the device, there would be no way to convert it back into an image that identifies the user,” he says.
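A minimal sketch of that privacy idea, assuming a stand-in feature extractor: the photo is reduced on-device to a small numeric code, and only that code would ever leave the phone. The random projection below is purely illustrative and is not the researchers' actual extractor.

```python
# Hedged sketch of on-device feature extraction: the image is mapped
# many-to-one into a low-dimensional code, so the photo itself cannot
# be recovered from what is shared. Purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
PROJECTION = rng.normal(size=(64, 128 * 128))  # fixed, many-to-one mapping

def to_feature_code(image: np.ndarray) -> np.ndarray:
    """Collapse a grayscale 128x128 face crop into a 64-dim code.

    The mapping discards most pixel information (16,384 values -> 64),
    so the original image cannot be reconstructed from the code.
    """
    flat = image.reshape(-1).astype(np.float64)
    return np.tanh(PROJECTION @ flat / flat.size)

photo = rng.random((128, 128))   # stand-in for a front-camera frame
code = to_feature_code(photo)    # only this vector would leave the device
print(code.shape)                # (64,)
```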

At the same time, designing the AI to build on its knowledge of the facial expressions of the specific person using it could improve the accuracy of a consumer version of the application, Nepal says.

“You wouldn't need to start from scratch. We know the general model is 75% accurate, so a specific person's data could be used to fine-tune the model. Devices within the next few years should easily be able to handle this,” Nepal says. “We know that facial expressions are indicative of emotional state. Our study is a proof of concept that, when it comes to using technology to assess mental health, facial expressions are among the most important signals we can get.”
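A small sketch of that fine-tuning idea, under the assumption that a generic model is adapted with batches of one user's own data; scikit-learn's `partial_fit` stands in here for whatever on-device update mechanism a real deployment would use.

```python
# Hedged sketch of personalization: start from a generic model trained
# on many people, then keep training on a single user's own features
# and self-reports. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(7)

# Generic model: trained once on pooled data from many participants.
X_pool = rng.normal(size=(5000, 8))
y_pool = (X_pool[:, 0] + rng.normal(size=5000) > 0.8).astype(int)
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_pool, y_pool, classes=np.array([0, 1]))

# Personalization: small batches of one user's data arrive over time;
# each batch nudges the generic model toward that individual.
for _ in range(10):
    X_user = rng.normal(loc=0.3, size=(32, 8))  # one user's shifted distribution
    y_user = (X_user[:, 0] > 0.8).astype(int)
    model.partial_fit(X_user, y_user)
```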


