Abstract

Deep learning (DL) models have been successfully applied in video-based affective computing, enabling, for instance, the recognition of emotions and mood, or the estimation of the intensity of pain or stress of individuals based on their facial expressions. Despite recent advances with state-of-the-art DL models for spatio-temporal recognition of facial expressions associated with depressive behaviour, key challenges remain in the cost-effective application of 3D-CNNs: (1) 3D convolutions usually employ structures with a fixed temporal depth, which limits their ability to extract discriminative representations, since the spatio-temporal variations across different depression levels are typically small; and (2) these models are computationally complex and consequently susceptible to overfitting. To address these challenges, we propose a novel DL architecture called the Maximization and Differentiation Network (MDN) to effectively represent facial expression variations that are relevant for depression assessment. The MDN, which operates without 3D convolutions, explores multiscale temporal information using a maximization block that captures smooth facial variations and a difference block that encodes sudden facial variations. Extensive experiments with 100- and 152-layer variants of our MDN show improved performance while reducing the number of parameters by more than 3× compared with 3D ResNet models. Our model also outperforms other 3D models and achieves state-of-the-art results for depression detection. Code is available at: https://github.com/wheidima/MDN
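The abstract does not give implementation details for the two blocks, but the idea can be sketched in a few lines. Below is a minimal, hypothetical PyTorch sketch; the class names, tensor layout (batch, channels, time, height, width), and the scale/offset values are assumptions rather than the authors' design, whose official code lives at the repository linked above. The maximization block summarizes smooth facial variations by max-pooling features over several temporal window sizes, while the difference block highlights sudden variations by differencing features at several temporal offsets.

import torch
import torch.nn as nn


class MaximizationBlock(nn.Module):
    # Hypothetical sketch: captures smooth facial variations by
    # max-pooling features over several temporal window sizes.
    def __init__(self, scales=(2, 4, 8)):
        super().__init__()
        # One temporal max-pool per scale; stride 1 preserves resolution.
        self.pools = nn.ModuleList(
            nn.MaxPool3d(kernel_size=(s, 1, 1), stride=1,
                         padding=(s // 2, 0, 0))
            for s in scales)

    def forward(self, x):  # x: (B, C, T, H, W)
        # Trim the padded outputs back to T frames, then stack the scales
        # along the channel axis.
        outs = [p(x)[:, :, :x.size(2)] for p in self.pools]
        return torch.cat(outs, dim=1)


class DifferenceBlock(nn.Module):
    # Hypothetical sketch: encodes sudden facial variations as feature
    # differences at several temporal offsets (requires T > max offset).
    def __init__(self, offsets=(1, 2, 4)):
        super().__init__()
        self.offsets = offsets

    def forward(self, x):  # x: (B, C, T, H, W)
        outs = []
        for d in self.offsets:
            diff = x[:, :, d:] - x[:, :, :-d]  # frame-to-frame change
            pad = x.new_zeros(x.size(0), x.size(1), d,
                              x.size(3), x.size(4))
            outs.append(torch.cat([diff, pad], dim=2))  # keep length T
        return torch.cat(outs, dim=1)

Because both blocks preserve the temporal length T, their outputs can be concatenated along the channel axis and passed to the next layer, which is one plausible way to combine smooth and sudden variation cues without any 3D convolution.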
