Abstract

Facial expression recognition is a challenging task owing to subtle inter-class differences and significant intra-class variations. To address this problem, we propose a novel dual-channel alternation training strategy, in which image pairs with different expressions from the same identity and image pairs with the same expression from different identities are alternately fed into a Siamese network for model training. Unlike previous studies, we disentangle the features extracted from each branch of the Siamese network into three feature subspaces, namely an expression-related subspace, an identity-related subspace, and a shared subspace, to reduce the potential negative effects of expression-related features being contaminated by identity components. To further enhance the ability to pull the same expressions together and push different expressions apart in the feature space, the Hilbert–Schmidt independence criterion (HSIC) is introduced to design an identity-sensitive and expression-sensitive loss function, owing to its ability to measure statistical dependence between high-dimensional representations. Comprehensive experiments on benchmark datasets demonstrate that the proposed approach produces competitive recognition results compared with state-of-the-art methods.
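
The abstract does not give implementation details, but the empirical HSIC term it refers to can be illustrated with the standard biased estimator HSIC(X, Y) = tr(KHLH)/(n-1)^2, where K and L are kernel matrices over two batches of features and H is the centering matrix. The sketch below is a minimal PyTorch illustration under these assumptions; the function names (rbf_kernel, hsic), the RBF kernel choice, and the median-heuristic bandwidth are illustrative and not taken from the paper.

```python
import torch

def rbf_kernel(x, sigma=None):
    # Pairwise squared Euclidean distances between rows of x.
    d2 = torch.cdist(x, x, p=2).pow(2)
    if sigma is None:
        # Median heuristic for the kernel bandwidth (assumed choice).
        sigma = d2[d2 > 0].median().sqrt()
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y):
    """Biased empirical HSIC estimate between two feature batches.

    x: (n, d_x), y: (n, d_y); larger values indicate stronger
    statistical dependence between the two representations.
    """
    n = x.size(0)
    K = rbf_kernel(x)
    L = rbf_kernel(y)
    # Centering matrix H = I - (1/n) * 11^T.
    H = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In a loss of the kind described above, such a term could in principle be maximized for feature pairs that should be dependent (e.g., same expression) and minimized for pairs that should be independent (e.g., the expression and identity subspaces); the paper's exact formulation is not specified in the abstract.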
