Abstract

Diagnosing depression is challenging because traditional tools such as questionnaires and interviews are inherently subjective. Researchers are therefore exploring alternative detection methods based on facial and vocal features. This study investigated the potential of such features using two datasets: images of facial expressions with emotion labels, and a vocal expression dataset of positive and negative words. Four deep-learning models were evaluated for depression detection from facial expressions, and two traditional machine-learning models were trained for sentiment analysis on the vocal expression dataset. The CNN model performed best for facial expression analysis, while the Naive Bayes model performed best for vocal expression analysis. The models were integrated into a web application that lets users upload a video and receive an analysis of their facial and vocal expressions for signs of depression. This study demonstrates the potential of facial and vocal features for depression detection and provides insight into the performance of different machine learning algorithms for this task. The web application could be a useful tool for individuals monitoring their mental health and may support mental health professionals in their clinical assessments of depression.
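As a rough illustration of the vocal-expression branch, the sketch below implements a minimal multinomial Naive Bayes classifier over positive/negative words. The training phrases and function names are hypothetical stand-ins, not the paper's actual dataset or code; a real pipeline would first transcribe the uploaded video's audio before classifying the words.

```python
from collections import Counter
import math

# Hypothetical toy data standing in for the study's positive/negative word dataset.
train = [
    ("happy joyful great wonderful", "positive"),
    ("hopeful calm good cheerful", "positive"),
    ("sad hopeless tired worthless", "negative"),
    ("empty lonely gloomy miserable", "negative"),
]

def train_nb(samples):
    """Count per-class word frequencies for multinomial Naive Bayes."""
    word_counts = {"positive": Counter(), "negative": Counter()}
    class_counts = Counter()
    vocab = set()
    for text, label in samples:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Return the class with the highest log-posterior, with Laplace smoothing."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        n = sum(word_counts[label].values())
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

wc, cc, vocab = train_nb(train)
print(classify("sad and hopeless", wc, cc, vocab))  # prints "negative"
```

Naive Bayes suits this task because word-level sentiment cues are largely independent signals, which matches the model's conditional-independence assumption.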
