Abstract

Previous work on generative adversarial networks (GANs) has mainly focused on synthesizing high-fidelity images. In this paper, we present a framework that leverages the knowledge learned by GANs for semantic face manipulation. In particular, we propose to control the semantics of synthesized faces by adapting their latent codes with an attribute prediction model. Moreover, to estimate different facial attributes more accurately, we pretrain the attribute prediction model by inverting synthesized face images back into the GAN latent space. As a result, our method explicitly accounts for the semantics encoded in the latent space of a pretrained GAN and can faithfully edit attributes such as eyeglasses, smiling, baldness, age, mustache, and gender in high-resolution face images. Extensive experiments show that our method outperforms the state of the art in both face attribute prediction and semantic face manipulation.
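The core idea of adapting a latent code with an attribute prediction model can be sketched as gradient-based latent editing: move the code in the direction that changes the predictor's score for a target attribute. The sketch below is a minimal illustration, not the paper's implementation; the toy linear `G` (generator) and `attr` (attribute scorer) stand in for pretrained networks, and all names and hyperparameters are assumptions.

```python
import torch

# Hypothetical stand-ins for a pretrained GAN generator and attribute
# predictor; shapes and architectures are illustrative only.
torch.manual_seed(0)
latent_dim, image_dim = 8, 16
G = torch.nn.Linear(latent_dim, image_dim)   # toy "generator"
attr = torch.nn.Linear(image_dim, 1)         # toy attribute scorer

def edit_latent(z, target=1.0, steps=50, lr=0.1):
    """Nudge latent code z so the synthesized image's predicted
    attribute score moves toward `target` (e.g. 1.0 = "smiling")."""
    z = z.clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = torch.sigmoid(attr(G(z)))      # attribute probability
        loss = (score - target).pow(2).mean()  # pull score toward target
        loss.backward()                        # gradient w.r.t. z only
        opt.step()
    return z.detach()

z0 = torch.randn(1, latent_dim)
z1 = edit_latent(z0, target=1.0)
s0 = torch.sigmoid(attr(G(z0))).item()
s1 = torch.sigmoid(attr(G(z1))).item()
print(s0, s1)  # the edited code's attribute score should be higher
```

Only the latent code is optimized; the generator and predictor stay frozen, which is what lets the edit stay on the GAN's learned image manifold rather than altering pixels directly.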
