Abstract

This paper presents a comparative analysis of two recent image-to-image translation models based on Generative Adversarial Networks (GANs). The first is UNIT, which couples GANs and variational autoencoders (VAEs) through a shared latent space; the second is StarGAN, which consists of a single GAN model. Given training data from two different domains of the CelebA dataset, both models learn the translation task in both directions. The term domain denotes a set of images sharing the same attribute value; the attributes considered are eyeglasses, blond hair, beard, smiling, and age. Five UNIT models are trained separately, one per attribute, while only a single StarGAN model is trained for all attributes. For evaluation, we conduct several experiments and provide a quantitative comparison using the Generative Adversarial Metric (GAM), a direct metric that quantifies both generalization ability and the ability to generate photorealistic images. The experimental results show that the cross-domain UNIT is superior to the multi-domain StarGAN at generating the age and eyeglasses attributes, and that the two models perform equivalently when synthesizing the other attributes.
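As background on the evaluation protocol, GAM (Im et al., 2016) compares two trained GANs, $M_1 = (G_1, D_1)$ and $M_2 = (G_2, D_2)$, by swapping each model's generator against the other's discriminator. The formulation below is a sketch of that published definition, included for the reader's convenience; it is not a formula stated in this abstract:

$$ r_{\text{test}} = \frac{\epsilon\big(D_1(x_{\text{test}})\big)}{\epsilon\big(D_2(x_{\text{test}})\big)}, \qquad r_{\text{samples}} = \frac{\epsilon\big(D_1(G_2(z))\big)}{\epsilon\big(D_2(G_1(z))\big)}, $$

where $\epsilon(\cdot)$ is the classification error rate. A value of $r_{\text{test}}$ close to 1 indicates that neither discriminator overfits the held-out real data (the generalization check), and, given $r_{\text{test}} \approx 1$, $r_{\text{samples}} < 1$ means $M_1$'s generator fools the opposing discriminator more often and thus produces the more photorealistic samples ($r_{\text{samples}} > 1$ favors $M_2$).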
