Abstract

In recent years, significant progress has been made in auxiliary diagnosis systems for depression. However, most research has focused on combining features from multiple modalities to improve classification accuracy, which increases space and time overhead and introduces feature synchronization problems. To address this issue, this paper presents a single-modal framework for detecting depression based on changes in facial expressions. First, we propose a robust method for extracting angle features from facial landmarks and provide a theoretical proof that these features are invariant to translation and rotation. In addition, we introduce a flip correction method to mitigate angle deviations caused by head flips. The proposed method not only preserves the spatial topological relationships among facial landmarks, but also maintains the temporal correlation of the landmarks across consecutive frames. Finally, GhostNet is employed for depression detection, and the effectiveness of different modalities is compared. On the binary depression classification task using the DAIC-WOZ dataset, our framework significantly improves classification performance, achieving an F1 score of 0.80 for depression detection. Experimental results demonstrate that our method outperforms existing single-modality depression detection models.

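To illustrate the kind of geometric feature the abstract refers to, the minimal sketch below computes angles at facial-landmark triplets; such angles depend only on relative landmark geometry, so they are unchanged by in-plane translation and rotation. The triplet selection, the flip correction step, and the exact feature construction used in the paper are not given in the abstract, so this is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def angle_features(landmarks, triplets):
    """Compute the interior angle (radians) at the vertex of each landmark triplet.

    landmarks: (N, 2) array of 2-D facial landmark coordinates for one frame.
    triplets:  iterable of (i, j, k) index tuples; the angle is measured at j
               between the rays j->i and j->k.
    """
    feats = []
    for i, j, k in triplets:
        v1 = landmarks[i] - landmarks[j]
        v2 = landmarks[k] - landmarks[j]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
        feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(feats)

# Three landmarks forming a right angle at index 1.
pts = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
print(angle_features(pts, [(0, 1, 2)]))  # ~ [1.5708]

# Rotating and translating the same points leaves the angle unchanged,
# demonstrating the translation and rotation invariance claimed above.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(angle_features(pts @ R.T + np.array([5.0, -3.0]), [(0, 1, 2)]))  # ~ [1.5708]
```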