Abstract

Facial expression recognition (FER) is one of the most important computer vision tasks for understanding human inner emotions. However, the poor generalization ability of FER models limits their applicability due to tremendous intraclass variation. Expressions of varying intensities are especially challenging, because the appearance differences among weak expressions are subtle. To address these issues, this paper presents a dual subspace manifold learning method based on a graph convolutional network (GCN) for intensity-invariant FER. Our method treats the target task as a node classification problem and learns the manifold representation using two subspace analysis methods: locality preserving projection (LPP) and peak-piloted locality preserving projection (PLPP). Inspired by the classic LPP, which maintains local similarity among data, this paper introduces a novel PLPP that maintains the locality between peak and non-peak expressions to enhance the representation of weak expressions. This paper also reports two subspace fusion methods, one based on a weighted adjacency matrix and the other on a self-attention mechanism, that combine LPP and PLPP to further improve FER performance. The latter achieves recognition accuracies of 93.83% on the CK+ dataset, 74.86% on the Oulu-CASIA dataset, and 75.37% on the MMI dataset for weak expressions, outperforming state-of-the-art methods.
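For context, the sketch below illustrates the classic LPP algorithm that the abstract refers to as the starting point: build a neighborhood graph over the samples, form its Laplacian, and solve a generalized eigenproblem whose smallest eigenvectors give locality-preserving projection directions. This is only a minimal illustration of standard LPP under common assumptions (k-nearest-neighbor graph with a heat-kernel weighting); it does not reproduce the paper's PLPP variant, the GCN, or the fusion schemes, and the function and parameter names are illustrative rather than taken from the paper.

```python
# Minimal sketch of classic locality preserving projection (LPP).
# Assumptions: rows of X are samples, k-NN heat-kernel affinity,
# small ridge term added for numerical stability.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist


def lpp(X, n_components=2, n_neighbors=5, t=1.0):
    """X: (n_samples, n_features). Returns projection matrix A of
    shape (n_features, n_components); embed samples with X @ A."""
    n = X.shape[0]
    # Heat-kernel affinity restricted to the k nearest neighbors.
    dist = cdist(X, X, "sqeuclidean")
    W = np.zeros((n, n))
    idx = np.argsort(dist, axis=1)[:, 1:n_neighbors + 1]  # skip self
    for i in range(n):
        W[i, idx[i]] = np.exp(-dist[i, idx[i]] / t)
    W = np.maximum(W, W.T)               # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                            # graph Laplacian
    # Generalized eigenproblem  X^T L X a = lambda X^T D X a;
    # eigenvectors with the smallest eigenvalues preserve locality.
    M1 = X.T @ L @ X
    M2 = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])
    _, vecs = eigh(M1, M2)               # eigenvalues in ascending order
    return vecs[:, :n_components]


# Usage sketch: project features into a 2-D locality-preserving subspace.
# X = np.random.rand(100, 20); Y = X @ lpp(X, n_components=2)
```

The paper's PLPP, as described in the abstract, would differ in how the affinity graph is built, linking each non-peak expression to its corresponding peak expression rather than relying only on generic nearest neighbors.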
