Abstract

This paper presents a deep neural architecture for synthesizing a frontal, neutral-expression face image of a subject from a query face image with arbitrary expression. This is achieved by combining a feature-space perceptual loss, a pixel-level loss, an adversarial loss, a symmetry loss, and an identity-preserving loss. We leverage both the frontal and neutral face distributions and pre-trained discriminative deep perceptron models to guide the identity-preserving inference of normalized views from expressive profiles. Unlike previous generative methods that use their intermediate features for recognition tasks, the resulting expression- and pose-disentangled face image has potential for several downstream applications, such as facial expression recognition, face recognition, and attribute estimation. We show that our approach produces photorealistic and coherent results that assist deep metric learning-based facial expression recognition (FER), achieving promising results on two well-known FER datasets.
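The abstract names five loss terms that are combined into a single training objective. A minimal sketch of such a weighted combination is shown below; the weight names and default values are illustrative assumptions, since the abstract does not specify how the terms are balanced.

```python
# Hypothetical sketch of the combined objective described in the abstract.
# The five terms (pixel, perceptual, adversarial, symmetry, identity) are
# taken from the abstract; the lambda weights are illustrative assumptions,
# not values from the paper.

def total_loss(l_pixel, l_perceptual, l_adv, l_sym, l_id,
               w_pixel=1.0, w_perc=1.0, w_adv=0.01, w_sym=0.1, w_id=0.1):
    """Weighted sum of the five loss terms named in the abstract."""
    return (w_pixel * l_pixel
            + w_perc * l_perceptual
            + w_adv * l_adv
            + w_sym * l_sym
            + w_id * l_id)
```

In practice each argument would be a scalar produced by its own loss module (e.g. an L1 pixel loss, a VGG-style perceptual loss, a GAN discriminator loss), and the weights would be tuned on a validation set.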
