Abstract
Emotion recognition from facial expressions has recently achieved unprecedented accuracy with the development of deep learning. Despite this progress, most existing emotion recognition methods are supervised and therefore require extensive annotation. This issue is particularly pronounced for continuous-domain datasets, where annotation costs are very high. Furthermore, discrete-domain datasets containing specific poses are too uniform to reflect complex, real-world emotions, and existing methods trained with a classification loss pay little attention to image similarity, making it difficult to distinguish similar emotions. To improve the learning of image similarity and reduce the annotation cost of continuous-domain datasets, this research proposes a Semi-Supervised Emotion Recognition (SSER) method that incorporates an Activation-matrix Triplet loss (AMT loss) and pseudo-labels with Complementary Information (CI labels). Specifically, the AMT loss encodes the multiple activation channels of an image as a matrix, which is used to capture image similarity. The CI label first exploits the coupling effect of complementary information from images and a multi-stage model for semi-supervised learning to obtain high-confidence pseudo-labels; entropy minimization and consistency regularization are then applied to further improve pseudo-label accuracy. SSER is evaluated on continuous-domain datasets (AFEW-VA and AFF-Wild) and discrete-domain datasets (FER2013 and CK+). The experimental results demonstrate that SSER with the AMT loss and CI labels improves emotion recognition on continuous-domain datasets, while also remaining effective on discrete-domain datasets.
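The abstract does not give the exact formulations, but the two core ideas can be sketched in miniature. The sketch below is an illustrative assumption, not the paper's actual method: the Frobenius-norm distance between activation matrices, the margin value, the confidence threshold, and all function names are hypothetical choices made here for concreteness.

```python
import math

def frob_dist(a, b):
    """Frobenius-norm distance between two activation matrices
    (nested lists of equal shape); a plausible matrix distance for
    a triplet loss over channel activations."""
    return math.sqrt(sum((x - y) ** 2
                         for ra, rb in zip(a, b)
                         for x, y in zip(ra, rb)))

def amt_triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard margin-based triplet loss applied to activation
    matrices: pull the positive closer than the negative by `margin`.
    (Illustrative stand-in for the paper's AMT loss.)"""
    return max(frob_dist(anchor, positive)
               - frob_dist(anchor, negative) + margin, 0.0)

def ci_pseudo_labels(probs, threshold=0.95):
    """Keep only high-confidence predictions as pseudo-labels, a common
    semi-supervised heuristic; the paper's CI-label construction is
    more involved (multi-stage model, complementary information)."""
    return [(i, max(range(len(p)), key=p.__getitem__))
            for i, p in enumerate(probs) if max(p) >= threshold]
```

For example, a triplet whose positive is identical to the anchor while the negative is far away incurs zero loss, and only the first of two predictions below clears the 0.95 confidence bar:

```python
amt_triplet_loss([[0, 0], [0, 0]], [[0, 0], [0, 0]], [[1, 1], [1, 1]])  # 0.0
ci_pseudo_labels([[0.98, 0.02], [0.6, 0.4]])  # [(0, 0)]
```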