Abstract

Facial expression synthesis has drawn increasing attention in computer vision, graphics and animation. Recently, generative adversarial nets (GANs) have opened a new perspective on face synthesis, achieving remarkable success in photorealistic image generation and image-to-image translation. In this study, the authors present an appearance-based facial expression synthesis framework, ApprGAN, which combines shape and texture and introduces cycle consistency and identity mapping into the adversarial learning. Specifically, given an input face image, a pair of shape and texture generators are trained for synthetic shape deformation and expression detail generation, respectively. Extensive experiments on expression synthesis and cross-database synthesis were conducted, together with comparisons with existing methods. Results of expression synthesis and quantitative verification on various databases show the effectiveness of ApprGAN in synthesising photorealistic and identity-preserving expressions, and its marked improvement over existing methods.
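The cycle-consistency and identity-mapping terms mentioned in the abstract can be illustrated with a minimal sketch. The generator names `G` and `F`, the list-based L1 distance, and the loss formulation below are illustrative assumptions in the style of CycleGAN-type training, not the authors' actual implementation:

```python
# Hedged sketch of cycle-consistency and identity-mapping losses for a pair
# of generators G (source -> target expression) and F (target -> source).
# All names and the plain-list L1 distance are illustrative assumptions.

def l1(a, b):
    """Mean absolute difference between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_loss(G, F, x, y):
    """Cycle consistency: F(G(x)) should reconstruct x and G(F(y)) should
    reconstruct y, encouraging the generators to preserve identity content."""
    return l1(F(G(x)), x) + l1(G(F(y)), y)

def identity_loss(G, F, x, y):
    """Identity mapping: a sample already in the generator's target domain
    should pass through (nearly) unchanged, discouraging needless edits."""
    return l1(G(y), y) + l1(F(x), x)
```

In CycleGAN-style training these terms are added, with weighting coefficients, to the standard adversarial losses of the two generator/discriminator pairs.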
