Abstract

Unpaired image-to-image translation aims to translate input images from a source domain to desired outputs in a target domain by learning from unpaired training data. The cycle-consistency constraint provides a general principle for estimating and evaluating the forward and backward mapping functions between the two domains. In many cases, the information entropy of images from the two domains is not equal, yielding an information-rich domain and an information-poor domain. However, existing cycle-consistency-based solutions either ignore this information asymmetry entirely (the common choice), which degrades performance on asymmetric unpaired image-to-image translation, or rely on special task-specific designs and extra loss components. These elaborate designs, especially for the relatively harder translation direction from the information-poor domain to the information-rich domain (poor-to-rich translation), require extra labor and are limited to specific tasks. In this paper, we propose a novel asynchronous generative adversarial network named Async-GAN, a model-agnostic framework that easily turns symmetric models into powerful asymmetric counterparts that handle asymmetric unpaired image-to-image translation much better. The key innovation is to iteratively build gradually improving intermediate domains for generating pseudo paired training samples, which provide stronger full supervision for assisting the poor-to-rich translation. Extensive experiments on various asymmetric unpaired translation tasks demonstrate the superiority of the proposed method. Furthermore, the proposed training framework can be extended to various Cycle-GAN solutions and achieves a performance gain.
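The cycle-consistency constraint referred to above can be illustrated with a minimal sketch. Here `G` and `F` are hypothetical closed-form stand-ins for the forward (source-to-target) and backward (target-to-source) generators; in the actual method these are neural networks trained adversarially, and this toy example only shows the reconstruction error that the cycle-consistency loss penalizes:

```python
import numpy as np

def G(x):
    # Hypothetical forward generator: source -> target.
    return 2.0 * x + 1.0

def F(y):
    # Hypothetical backward generator: target -> source (exact inverse of G).
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x):
    """L1 reconstruction error after a full forward-backward cycle F(G(x))."""
    return float(np.mean(np.abs(F(G(x)) - x)))

x = np.linspace(-1.0, 1.0, 5)
loss = cycle_consistency_loss(x)  # near zero, since F exactly inverts G
```

When `F` only approximately inverts `G`, as with learned generators, this loss is nonzero and drives both mappings toward mutual consistency; the asymmetric setting arises when one direction must hallucinate information the other direction discards.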
