Abstract

In this paper, we propose a framework for Generative Adversarial Network (GAN) inversion that uses a semantic segmentation map to invert an input image into the GAN latent space. In general, it remains difficult to invert the semantic information of an input image into the GAN latent space; in particular, conventional GAN inversion methods often fail to accurately recover fine-grained semantic attributes such as the shape of glasses or the hairstyle. To this end, we propose a framework that uses the semantic segmentation map of the real image to guide the latent codes corresponding to the coarse-resolution feature maps in StyleGAN2. Experimental results show that, compared with previous GAN inversion methods, our proposed method reconstructs input images more accurately and enables detailed editing of a variety of their semantic attributes.
