Abstract

Image-to-image translation is the task of translating images between domains while preserving the identity of each image. Generative Adversarial Networks (GANs), and conditional GANs in particular, have recently shown remarkable success in image-to-image translation and semantic manipulation. Such methods require paired data, meaning that each image must have ground-truth translations across domains. Cycle-consistent GANs remove this requirement by training on unpaired data. These methods work well for translations involving color and texture changes but fail when shape changes are required. This paper first analyzes the trade-off between the weight placed on cycle-consistency and the shape changes needed to produce natural-looking imagery. We then propose computationally simple architectural and loss changes that allow the model to perform color, texture, and shape changes as required. The results demonstrate improved translations between domains that require shape changes. We additionally show that the embeddings learned by our model capture interesting and useful attention/segmentation information about the translated images.
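
For reference, the cycle-consistency objective at the heart of the trade-off discussed above can be sketched as follows. This is a generic illustration of the standard loss, not the paper's specific formulation; the generator names (G_AB, G_BA) and the weight lam are assumptions made for the example.

```python
import torch.nn as nn

def cycle_consistency_loss(G_AB, G_BA, real_A, real_B, lam=10.0):
    # Minimal sketch, assuming two generators: G_AB maps domain A -> B
    # and G_BA maps domain B -> A. lam is an illustrative weight.
    l1 = nn.L1Loss()
    # Round trip A -> B -> A must reconstruct the original A image.
    rec_A = G_BA(G_AB(real_A))
    # Round trip B -> A -> B must reconstruct the original B image.
    rec_B = G_AB(G_BA(real_B))
    return lam * (l1(rec_A, real_A) + l1(rec_B, real_B))
```

Raising lam strengthens the round-trip reconstruction constraint, which is precisely what penalizes the geometric deformations needed for shape changes; this is the tension the paper analyzes.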
