Abstract

This paper addresses the problem of multi-content disentanglement in unsupervised image transfer models. Image transfer based on generative models such as the VAE [1] or GAN [2] can be defined as mapping data from a source domain to a target domain. Existing disentanglement methods have focused on separating elements of the latent vector to distinguish content and style information in an image. However, because they extract information from all pixels, it is difficult to perform image transfer while controlling specific contents. To address this problem, image transfer methods that can control the disentanglement of a specific content have recently been proposed. In this paper, by adapting the disentanglement concept to control multiple specific contents in an image, we propose an architecture suited to image transfer tasks such as adding or subtracting multiple contents. In addition, we propose an adversarially learned auxiliary discriminator that further improves the quality of images synthesized by the multi-content disentanglement method. With the proposed method, we generate images while controlling two contents on the CelebA dataset, and show that the auxiliary discriminator attaches a specific content more clearly.
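The auxiliary discriminator described above is trained adversarially against the generator: it scores synthesized images, and the generator is penalized when those scores are low. The abstract does not give the loss formulation, so the sketch below uses a standard hinge adversarial loss purely for illustration; the function names, margin, and toy scores are assumptions, not the paper's actual objective.

```python
import numpy as np

def d_hinge_loss(d_real, d_fake):
    # Discriminator hinge loss: push scores on real images above +1
    # and scores on synthesized (fake) images below -1.
    return (np.mean(np.maximum(0.0, 1.0 - d_real))
            + np.mean(np.maximum(0.0, 1.0 + d_fake)))

def g_hinge_loss(d_fake):
    # Generator loss: raise the discriminator's scores on synthesized images.
    return -np.mean(d_fake)

# Toy discriminator outputs: real images score positive, fakes negative.
d_real = np.array([1.5, 2.0])
d_fake = np.array([-1.2, -0.8])
print(d_hinge_loss(d_real, d_fake))  # 0.1 (only the -0.8 fake violates the margin)
print(g_hinge_loss(d_fake))          # 1.0
```

In practice the two losses are minimized in alternation; a higher-quality auxiliary discriminator sharpens the gradient signal the generator receives for the attached content.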
