Abstract

With the rapid development of generative adversarial networks (GANs), recent years have witnessed a growing number of methods for reference-guided facial attribute transfer. Most state-of-the-art approaches combine facial information extraction, latent-space disentanglement, and target attribute manipulation. However, they either adopt reference-guided translation methods for manipulation or rely on a monolithic module to exchange diverse attributes, and therefore cannot accurately disentangle the exact facial attributes with their specific styles from the reference image. In this paper, we propose a deep realistic facial editing method (termed LMGAN) based on target-region focusing and a dual label constraint. The proposed method manipulates target attributes via latent-space exchange and consists of one subnetwork per attribute. Each subnetwork imposes label restrictions both at the target-attribute exchange stage and during training, optimizing generative quality and reference-style correlation. Our method performs well at disentangled representation and at transferring the target attribute's style accurately. A global discriminator is introduced to merge the generated edited region with the non-edited areas of the source image. Both qualitative and quantitative results on the CelebA dataset verify the effectiveness of the proposed LMGAN.
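The abstract describes merging the generated edited region with the non-edited areas of the source image. A minimal sketch of that compositing step, assuming a simple per-pixel mask blend (the function and variable names here are illustrative, not from the paper; the exact formulation is not given in the abstract):

```python
# Hypothetical sketch: per-pixel blend of the generated edit into the source,
# keeping non-edited areas untouched -- the composite a global discriminator
# would then evaluate as a whole image.
def composite(source, edited, mask):
    """source, edited: lists of pixel values in [0, 1];
    mask: 1.0 inside the editing region, 0.0 elsewhere."""
    return [m * e + (1.0 - m) * s for s, e, m in zip(source, edited, mask)]

source = [0.2, 0.2, 0.2, 0.2]   # source image pixels
edited = [0.9, 0.9, 0.9, 0.9]   # generator output pixels
mask   = [0.0, 1.0, 1.0, 0.0]   # edit only the middle two pixels

out = composite(source, edited, mask)
# out -> [0.2, 0.9, 0.9, 0.2]: only the masked region takes the edited values
```

In practice the mask would come from the target-region focusing stage, and the blend would run over full image tensors rather than a flat pixel list.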
