Abstract

Recently, thanks to state-of-the-art techniques in Generative Adversarial Networks (GANs), much work has achieved remarkable performance in learning the mapping between an input image and an output image without any paired data. However, traditional image-to-image translation methods consider only visual appearance properties; they fail to maintain the true semantics of an image during transfer from the source to the target domain. We propose a new approach that uses a GAN to translate unpaired images between domains while keeping high-level semantic abstractions aligned. Our model controls the hierarchical semantics of images by processing semantic information at the label level and the spatial level respectively, through label-consistency and attention-consistency losses. Experimental results on several benchmark datasets show that generated samples are both visually similar to target images and semantically consistent with their source counterparts. Furthermore, the experiments suggest that our method can effectively improve classification performance on the unsupervised domain adaptation problem.
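To make the two consistency terms concrete, below is a minimal sketch of how label-level and spatial-level (attention) consistency losses might be implemented in PyTorch. The classifier interface (returning logits plus an intermediate feature map), the channel-mean attention map, and the cross-entropy/L1 choices are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def attention_map(features: torch.Tensor) -> torch.Tensor:
    # Spatial attention: channel-wise mean of absolute activations,
    # normalized to a distribution over spatial positions.
    attn = features.abs().mean(dim=1)                 # (N, H, W)
    attn = attn.flatten(start_dim=1)                  # (N, H*W)
    return attn / (attn.sum(dim=1, keepdim=True) + 1e-8)

def semantic_consistency_losses(classifier, source, translated, labels):
    # Label level: the translated image should keep its source-domain class.
    # Spatial level: it should attend to the same regions as its source.
    # Assumes `classifier(x)` returns (logits, feature_map); hypothetical interface.
    logits_t, feats_t = classifier(translated)
    _, feats_s = classifier(source)
    label_loss = F.cross_entropy(logits_t, labels)
    attn_loss = F.l1_loss(attention_map(feats_t), attention_map(feats_s))
    return label_loss, attn_loss

In practice, such terms would be added with tunable weights to the usual adversarial (and cycle-consistency) objectives of the unpaired translation GAN.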
