Abstract

This paper presents a comparative analysis of two recent image-to-image translation models based on Generative Adversarial Networks (GANs). The first, UNIT, couples GANs with variational autoencoders (VAEs) through a shared latent space; the second, StarGAN, uses a single GAN model. Given training data from two different domains of the CelebA dataset, both models learn the translation task in both directions. The term domain denotes a set of images sharing the same attribute value; the attributes considered are eyeglasses, blond hair, beard, smiling, and age. Five UNIT models are trained separately, one per attribute, while only a single StarGAN model is trained for all attributes. For evaluation, we conduct experiments and provide a quantitative comparison using the Generative Adversarial Metric (GAM), a direct metric that quantifies both the ability to generalize and the ability to generate photorealistic images. The experimental results show that the cross-domain UNIT outperforms the multi-domain StarGAN when generating the age and eyeglasses attributes, and that the two models perform comparably on the remaining attributes.
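
To make the architectural contrast concrete, the sketch below illustrates the two translation mechanisms in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the network shapes, the `make_encoder`/`make_generator` helpers, and the dummy inputs are hypothetical, and UNIT's weight-sharing constraint and VAE loss terms are omitted.

```python
import torch
import torch.nn as nn

# --- UNIT-style translation (illustrative stand-ins, not the paper's code) ---
# UNIT keeps one encoder/generator pair per domain and assumes both encoders
# map into a shared latent space; weight sharing and VAE terms are omitted.

def make_encoder():
    return nn.Sequential(
        nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
    )

def make_generator():
    return nn.Sequential(
        nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh(),
    )

E1, E2 = make_encoder(), make_encoder()      # encoders for domains 1 and 2
G1, G2 = make_generator(), make_generator()  # generators for domains 1 and 2

def unit_translate_1_to_2(x1):
    z = E1(x1)    # encode into the shared latent space
    return G2(z)  # decode with the other domain's generator

# --- StarGAN-style translation ----------------------------------------------
# A single generator serves every attribute domain: the target label c is
# broadcast spatially and concatenated to the input image's channels.
num_attrs = 5  # eyeglasses, blond hair, beard, smiling, age
G_star = nn.Sequential(
    nn.Conv2d(3 + num_attrs, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh(),
)

def stargan_translate(x, c):
    c_map = c.view(c.size(0), -1, 1, 1).expand(-1, -1, x.size(2), x.size(3))
    return G_star(torch.cat([x, c_map], dim=1))

x = torch.randn(1, 3, 64, 64)                 # dummy 64x64 RGB batch
x12 = unit_translate_1_to_2(x)                # UNIT: domain 1 -> domain 2
x_glasses = stargan_translate(
    x, torch.tensor([[1., 0., 0., 0., 0.]]))  # StarGAN: add eyeglasses
```

This contrast is why the experiments require five separately trained UNIT models (one encoder/generator pair per attribute pairing) but only a single StarGAN model for all five attributes.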
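
GAM evaluates two trained GANs directly by swapping their discriminators: each discriminator is scored on real held-out images and on the rival generator's samples. The following is a hedged NumPy sketch of the ratio test, assuming each discriminator outputs the probability that a sample is real; the function names and the 0.5 decision threshold are illustrative choices, not taken from the paper.

```python
import numpy as np

def error_rate(scores, is_real):
    """Misclassification rate of a discriminator.

    scores: discriminator outputs, probability of 'real' in [0, 1].
    is_real: True if the scored samples are real, False if generated.
    """
    preds = scores > 0.5
    return float(np.mean(preds != is_real))

def gam_ratios(d1_test, d2_test, d1_on_g2, d2_on_g1, eps=1e-8):
    """GAM test and sample ratios for two GANs (G1, D1) and (G2, D2).

    d1_test, d2_test: each discriminator's scores on held-out real images.
    d1_on_g2: D1's scores on samples drawn from the rival generator G2.
    d2_on_g1: D2's scores on samples drawn from the rival generator G1.
    """
    r_test = error_rate(d1_test, True) / (error_rate(d2_test, True) + eps)
    r_sample = error_rate(d1_on_g2, False) / (error_rate(d2_on_g1, False) + eps)
    return r_test, r_sample

# Model 1 is judged the winner when r_sample < 1 (its generator fools the
# rival discriminator more often than vice versa) while r_test stays close
# to 1 (both discriminators generalize equally well to real test data).
```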
