Abstract
Many clinical studies have shown that facial expression recognition and cognitive function are impaired in patients with depression. In contrast to spontaneous facial expression mimicry (SFEM), our voluntary facial expression mimicry (VFEM) experiment enrolled 164 subjects (82 in a case group and 82 in a control group), who mimicked expressions of neutrality, anger, disgust, fear, happiness, sadness, and surprise. Our research proceeded as follows. First, we collected a large VFEM dataset from the subjects. Second, we extracted geometric features from the subjects' facial expression images and performed feature selection using Spearman correlation analysis, a random forest, and logistic regression-based recursive feature elimination (LR-RFE). The selected features revealed differences between the case group and the control group. Third, we combined the geometric features with the original images and improved upon advanced deep learning facial expression recognition (FER) algorithms in different frameworks, proposing E-ViT and E-ResNet based on VFEM. Both models achieved higher accuracies and F1 scores than their respective baseline models. Our research demonstrates that screening geometric features through feature selection and combining them with a deep learning model is effective for recognizing facial expressions in depression.
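The feature-selection step described above can be illustrated with a minimal sketch of LR-RFE, one of the three methods named in the abstract. The data below is synthetic and the feature counts are illustrative assumptions, not values from the study; the abstract's subject counts (82 case, 82 control) are used only to shape the toy dataset.

```python
# Minimal LR-RFE sketch: recursively eliminate the weakest feature
# (smallest absolute logistic-regression coefficient) until a target
# number of features remains. Synthetic data, illustrative only.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_features = 164, 20              # 82 case + 82 control subjects
X = rng.normal(size=(n_subjects, n_features))  # stand-in for geometric features
y = np.array([1] * 82 + [0] * 82)              # 1 = case group, 0 = control group

# Keep the 5 features the logistic regression ranks highest.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)

selected = np.flatnonzero(selector.support_)
print("selected feature indices:", selected)
```

In practice the selected indices would map back to named geometric features (e.g. landmark distances or angles), allowing a comparison of which features separate the case and control groups.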