Abstract

Depression has become one of the most prevalent mental disorders in recent years. To help clinicians effectively and efficiently estimate its severity, various automated systems based on deep learning have been proposed. To estimate the severity of depression, i.e., the Beck Depression Inventory-II (BDI-II) score, various deep architectures have been designed to perform regression with the Euclidean loss. However, they consider neither the label distribution nor the relationships between facial images and BDI-II scores, which can result in noisy labeling for automatic depression estimation (ADE). To mitigate this problem, we propose an automated deep architecture, namely the self-adaptation network (SAN), to improve this uncertain labeling for ADE. Specifically, the architecture consists of four modules: (1) ResNet-18 and ResNet-50 are adopted in the deep feature extraction module (DFEM) to extract informative deep features; (2) a self-attention module (SAM) is adopted to learn sample weights from the mini-batch; (3) a square ranking regularization module (SRRM) is proposed to divide samples into high and low partitions; and (4) a re-label module (RM) is used to re-label uncertain annotations in the low partitions. We conduct extensive experiments on the AVEC2013 and AVEC2014 depression databases and obtain performance comparable to that of other ADE methods in assessing the severity of depression. More importantly, the proposed method can learn valuable depression patterns from facial videos and achieves performance comparable to that of other methods for depression recognition.
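To make the four-module design concrete, below is a minimal PyTorch sketch of how such a pipeline could be wired together. It is an illustration under stated assumptions, not the authors' implementation: the partition ratio beta, the margin, the relabeling threshold delta, the use of a single ResNet-18 backbone for the DFEM, and all function and class names are hypothetical choices made for this example.

```python
# A minimal sketch of a SAN-style pipeline (assumptions, not the paper's code).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SelfAdaptationNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)                # DFEM: deep feature extractor
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.attention = nn.Sequential(                  # SAM: per-sample importance weight
            nn.Linear(512, 1), nn.Sigmoid()
        )
        self.regressor = nn.Linear(512, 1)               # predicts the BDI-II score

    def forward(self, x):
        f = self.features(x).flatten(1)                  # (B, 512) deep features
        w = self.attention(f)                            # (B, 1) self-attention weights
        y = self.regressor(f * w)                        # weighted features -> severity
        return y.squeeze(1), w.squeeze(1)

def srrm_loss(weights, beta=0.7, margin=0.1):
    """SRRM (sketch): rank mini-batch weights, split them into high and low
    partitions, and penalize the squared violation of a margin between the
    two partition means. beta and margin are assumed hyperparameters."""
    k = int(beta * weights.size(0))
    sorted_w, _ = torch.sort(weights, descending=True)
    high, low = sorted_w[:k], sorted_w[k:]
    violation = margin - (high.mean() - low.mean())
    return torch.clamp(violation, min=0.0) ** 2

def relabel(weights, targets, preds, beta=0.7, delta=5.0):
    """RM (sketch): for low-partition samples whose prediction disagrees with
    the annotation by more than delta BDI-II points, adopt the prediction as
    the new label. The rule and delta are illustrative assumptions."""
    k = int(beta * weights.size(0))
    low_idx = torch.argsort(weights, descending=True)[k:]
    new_targets = targets.clone()
    disagree = (preds[low_idx] - targets[low_idx]).abs() > delta
    new_targets[low_idx[disagree]] = preds[low_idx[disagree]].detach()
    return new_targets
```

In this reading, the SRRM term is added to the Euclidean regression loss so that confidently labeled samples receive visibly larger attention weights than uncertain ones, and the RM then only rewrites annotations in the low-weight partition, which is one plausible way to realize the "re-label the uncertain annotations" step described in the abstract.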
