Abstract

Since Generative Adversarial Networks (GANs) were proposed, image generation has attracted broad attention from researchers. A traditional GAN generates samples by playing a minimax game between a generator and a discriminator. In this paper, we propose a new method, EmotionGAN, for facial expression generation. Specifically, the inverse of the generator is first used to map the input image to a feature vector. A Generalized Linear Model (GLM) is then fitted to the direction of change of each expression in the feature space, providing linear guidance that moves the feature vector along the expression axis while keeping it consistent with the spatial distribution of the target feature vector. Finally, the generator reconstructs the facial image with the desired expression. By controlling the intensity applied to the feature vector, the generated image can be changed smoothly along a specific expression. Experiments show that EmotionGAN quickly generates face images with arbitrary expressions while preserving identity information, and that the generated images have more accurate attributes and higher resolution.
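The pipeline described above (invert an image to a latent code, fit a linear expression direction, shift the code, then decode) can be sketched with a toy linear-model example. This is a minimal illustration, not the paper's implementation: the latent codes, labels, and the `edit_latent` helper are assumptions, and a Gaussian GLM (ordinary least squares) stands in for the paper's GLM fit; the real method would decode the edited code with the trained generator.

```python
import numpy as np

# Hypothetical sketch of latent-space expression editing.
# 1) Latent codes z_i (in the paper, obtained via the generator's inverse)
#    come with expression-intensity labels y_i.
# 2) Fit a linear model y ~ Z w to estimate the expression direction.
# 3) Shift a code along the normalized direction; the generator would then
#    reconstruct the face from the edited code.

rng = np.random.default_rng(0)

n, d = 200, 16                               # samples, latent dimensionality
true_dir = np.zeros(d); true_dir[0] = 1.0    # toy ground-truth expression axis
Z = rng.normal(size=(n, d))                  # stand-in latent codes
y = Z @ true_dir + 0.1 * rng.normal(size=n)  # noisy expression intensities

# Least-squares fit (a Gaussian GLM) gives the changing direction.
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
w /= np.linalg.norm(w)                       # unit expression direction

def edit_latent(z, alpha):
    """Move a latent code along the expression axis by intensity alpha."""
    return z + alpha * w

z = rng.normal(size=d)
z_edited = edit_latent(z, 2.0)
# The edit moves the code exactly alpha units along the fitted axis.
print(round(float((z_edited - z) @ w), 2))
```

Because only the component along the expression axis changes, identity-related components of the latent code are left untouched, which mirrors the identity-preservation property the abstract claims.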
