Abstract

Image-to-image translation, whose goal is to learn a mapping between two image domains, has recently attracted considerable research interest. With unpaired training data, however, the problem becomes intrinsically ill-posed: infinitely many mappings exist between the two domains. Existing methods often fail to learn a sufficiently accurate mapping, which degrades the quality of the generated results. We argue that if the framework focuses on translating important object regions rather than irrelevant information such as the background, the difficulty of learning the mapping is reduced. In this paper, we propose a lightweight domain-attention generative adversarial network (LDA-GAN) for unpaired image-to-image translation, which requires fewer parameters and less memory. An improved domain-attention module (DAM) is introduced to establish long-range dependencies between the two domains, allowing the generator to focus on relevant regions and generate more realistic images. Furthermore, a novel separable-residual block (SRB) is designed to retain depth and spatial information during translation at a lower computational cost. Extensive qualitative and quantitative experiments demonstrate the effectiveness of our model on various image translation tasks.
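
For concreteness, here is a minimal PyTorch sketch of what a separable-residual block of this kind might look like, assuming (as the abstract only implies) that it pairs depthwise-separable convolutions with a skip connection. The class name and layer choices are illustrative, not the paper's exact design:

```python
import torch
import torch.nn as nn

class SeparableResidualBlock(nn.Module):
    """Hypothetical SRB: depthwise-separable convolutions plus a residual
    connection, one standard way to keep depth (channel) and spatial
    information while cutting the cost of a plain residual block."""

    def __init__(self, channels: int):
        super().__init__()

        def separable_conv(c: int) -> nn.Sequential:
            return nn.Sequential(
                # depthwise 3x3: spatial filtering, one filter per channel
                nn.Conv2d(c, c, kernel_size=3, padding=1, groups=c, bias=False),
                # pointwise 1x1: mixes channel (depth) information
                nn.Conv2d(c, c, kernel_size=1, bias=False),
                nn.InstanceNorm2d(c),
            )

        self.conv1 = separable_conv(channels)
        self.conv2 = separable_conv(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return x + out  # skip connection preserves the input signal


# Example: the block keeps the feature map's shape unchanged.
block = SeparableResidualBlock(64)
y = block(torch.randn(1, 64, 128, 128))  # -> shape (1, 64, 128, 128)
```

Compared with two full 3x3 convolutions, each depthwise-plus-pointwise pair uses roughly `1/C + 1/9` of the parameters and multiply-adds for `C` channels, which is consistent with the abstract's claim of fewer parameters and lower memory usage.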
