Abstract

Automatic depression analysis has been widely investigated on face videos that were carefully collected and annotated under laboratory conditions. However, videos collected in real-world settings may suffer from various types of noise due to challenging data acquisition conditions and a lack of annotators. Although deep learning (DL) models frequently achieve excellent depression analysis performance on datasets collected under controlled laboratory conditions, such noise may degrade their generalization ability on real-world depression analysis tasks. In this paper, we uncover that noisy facial data and annotations consistently change the distribution of training losses for facial depression DL models, i.e., noisy data-label pairs cause larger loss values than clean data-label pairs. Since different loss functions may be applied depending on the employed model and task, we propose a generic loss function relaxation strategy for face video-based depression analysis that jointly reduces the negative impact of various noisy data and annotation problems in both classification and regression loss functions, and whose parameters can be automatically adapted during model training. Experimental results on 25 artificially created noisy depression conditions (i.e., five noise types, each at five noise levels) show that our loss relaxation strategy clearly enhances both classification and regression loss functions, yielding superior face video-based depression analysis models under almost all noisy conditions. Our approach is robust to its main variable settings, and its parameters can be adaptively and automatically obtained during training.
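
To make the core idea concrete, the sketch below illustrates one generic way a loss relaxation of this kind could be wrapped around standard classification and regression losses. It is only a minimal illustration of the premise stated above (noisy data-label pairs tend to produce larger per-sample losses), not the paper's actual formulation: the function name `relaxed_loss`, the sigmoid down-weighting rule, and the batch-statistics threshold are all assumptions introduced here for clarity.

```python
# Hedged sketch: down-weight samples whose per-sample loss is unusually large,
# on the assumption that noisy data-label pairs yield larger losses than clean
# pairs. The threshold is re-estimated from each batch's loss distribution, so
# it adapts during training without hand-tuned constants. This is illustrative
# only and does not reproduce the paper's exact relaxation strategy.
import torch
import torch.nn.functional as F


def relaxed_loss(per_sample_loss: torch.Tensor, slope: float = 10.0) -> torch.Tensor:
    """Softly relax a vector of per-sample losses (hypothetical rule)."""
    mean = per_sample_loss.mean().detach()
    std = per_sample_loss.std().detach()
    threshold = mean + std
    # Weight ~1 for losses below the threshold, smoothly decaying above it.
    weights = torch.sigmoid(slope * (threshold - per_sample_loss.detach()) / (std + 1e-8))
    return (weights * per_sample_loss).mean()


# Usage with a classification loss (e.g., depressed vs. non-depressed labels) ...
logits = torch.randn(32, 2)
labels = torch.randint(0, 2, (32,))
ce = F.cross_entropy(logits, labels, reduction="none")
loss_cls = relaxed_loss(ce)

# ... and with a regression loss (e.g., a continuous depression severity score).
pred = torch.randn(32)
score = torch.randn(32)
mse = F.mse_loss(pred, score, reduction="none")
loss_reg = relaxed_loss(mse)
```

Because the wrapper only requires per-sample loss values, the same mechanism applies unchanged to both the classification and regression settings discussed in the abstract.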
