MIT's AI Can Detect Depression In Conversation

Posted: Aug 31 2018, 2:59pm CDT | Updated: Sep 1 2018, 1:23am CDT, in Latest Science News

 

This story may contain affiliate links.

Image: MIT's AI can detect depression from casual conversation. Credit: MIT

MIT researchers have developed a neural-network model that can detect signs of depression in raw text and audio data from clinical interviews.

Artificial intelligence holds real potential for detecting depression. Many current machine-learning models can identify words or sentences in a patient’s interview that may indicate depression. But these tools are limited as diagnostic aids because they rely on a person’s specific answers to specific questions.

To solve the problem, MIT researchers have developed a new neural-network model. The model can sift through raw text and audio data from interviews and accurately predict whether a person is depressed, without relying on any specific set of questions and answers.

The technology opens up the possibility of one day detecting signs of depression in natural conversation. This could be especially useful for people who are unaware of their illness and do not go to a clinician for a diagnosis.

“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” said lead author Tuka Alhanai, a researcher in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you want to deploy [depression-detection] models in [a] scalable way … you want to minimize the amount of constraints you have on the data you’re using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual.”

The MIT model has a remarkable ability to detect patterns indicative of depression and then apply them to new individuals without additional information. It doesn’t require specific questions or certain responses to draw a conclusion.

“We call it ‘context-free,’ because you’re not putting any constraints into the types of questions you’re looking for and the type of responses to those questions,” Alhanai said.

For the new model, the researchers used a technique called sequence modeling, often used for speech processing. They fed text and audio data from questions and answers from both depressed and non-depressed individuals into the system and taught it to extract the speech patterns that emerged for people with or without depression. For instance, words like “sad,” “low,” or “down” may be paired with audio signals that are flatter, slower, or more monotone.
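The idea of pairing text cues with audio cues can be sketched in toy form below. This is an illustrative sketch only: the word list, thresholds, and feature names are all made-up assumptions, not details from the MIT model, which is a trained neural network rather than a hand-tuned scorer.

```python
# Toy sketch of combining hypothetical text cues (words like "sad", "low",
# "down") with hypothetical audio cues (flat, slow, monotone speech) over a
# sequence of interview segments. All thresholds here are invented for
# illustration; the real model learns such patterns from data.

DEPRESSION_WORDS = {"sad", "low", "down"}

def score_segment(words, pitch_variance, speech_rate):
    """Score one answer segment: higher means more depression-like cues."""
    text_cue = sum(1 for w in words if w.lower() in DEPRESSION_WORDS)
    audio_cue = 0
    if pitch_variance < 0.2:   # flatter, more monotone voice (toy threshold)
        audio_cue += 1
    if speech_rate < 2.0:      # slower speech in words/second (toy threshold)
        audio_cue += 1
    return text_cue + audio_cue

def score_conversation(segments):
    """Accumulate cue scores over the sequence of segments in an interview."""
    return sum(score_segment(*seg) for seg in segments)

segments = [
    (["i", "feel", "sad", "and", "down"], 0.1, 1.5),
    (["not", "much", "going", "on"], 0.15, 1.8),
]
print(score_conversation(segments))  # 6 on this toy input
```

A real sequence model would replace the hand-set thresholds with learned weights over the whole text-audio sequence, but the structure of the input, ordered segments carrying both word and acoustic features, is the same.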

“The model sees sequences of words or speaking style, and determines that these patterns are more likely to be seen in people who are depressed or not depressed,” said Alhanai. “Then, if it sees the same sequences in new subjects, it can predict if they’re depressed too.”

The researchers trained their model on a dataset of 142 interactions from the Distress Analysis Interview Corpus, which contains audio, text, and video interviews of patients with mental-health issues. In testing, the model was evaluated using the metrics of precision and recall. Precision measures what fraction of the subjects the model flagged as depressed had in fact been diagnosed as depressed. Recall measures what fraction of all the diagnosed-depressed subjects in the dataset the model detected. The model scored 71 percent in precision and 83 percent in recall.
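Precision and recall are standard evaluation metrics, and the computation behind the reported scores can be sketched as follows. The labels below are made up for illustration; only the 71 percent precision and 83 percent recall figures come from the article.

```python
# Compute precision and recall from binary model predictions against
# clinical diagnoses (1 = depressed, 0 = not depressed).

def precision_recall(predicted, diagnosed):
    tp = sum(1 for p, d in zip(predicted, diagnosed) if p and d)      # true positives
    fp = sum(1 for p, d in zip(predicted, diagnosed) if p and not d)  # false positives
    fn = sum(1 for p, d in zip(predicted, diagnosed) if not p and d)  # false negatives
    precision = tp / (tp + fp)  # of those flagged, how many were diagnosed
    recall = tp / (tp + fn)     # of those diagnosed, how many were flagged
    return precision, recall

# Toy example: six subjects, the model flags four, five are actually diagnosed.
predicted = [1, 1, 1, 1, 0, 0]
diagnosed = [1, 1, 1, 0, 1, 1]
p, r = precision_recall(predicted, diagnosed)
print(p, r)  # 0.75 precision, 0.6 recall on this toy data
```

High recall with lower precision, as the MIT model shows, means the system catches most diagnosed cases at the cost of some false alarms, a trade-off often preferred in screening settings.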



The Author

Hira Bashir
The latest discoveries in science are the passion of Hira Bashir. With years of experience, she spots the most interesting new achievements of scientists around the world and covers them in easy-to-understand reporting.

 

 
