Abstract
The task of image-to-image translation is to generate images that match the style of a target domain while preserving the salient features of the original image. This paper proposes an adaptive feature fusion method for unsupervised image translation. The proposed architecture, termed AFF-UNIT, builds on a compact network structure to further improve the quality of generated images. First, a feature extraction module based on an adaptive feature fusion method is proposed, which combines low-level fine-grained information with high-level semantic information to obtain feature maps with richer information. In addition, a feature-similarity loss is proposed to guide the feature extraction module toward features that are more conducive to improving the translation result. Furthermore, AFF-UNIT reuses the feature extraction module in both the generator and the discriminator to simplify the framework. Extensive experiments on five popular benchmarks demonstrate the superior performance of AFF-UNIT over state-of-the-art methods in terms of FID, KID, IS, and human preference. Comprehensive ablation studies isolate the contribution of each proposed component.
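The abstract does not give the exact formulation of the fusion module or the feature-similarity loss, but the two ideas can be illustrated with a minimal NumPy sketch: a weighted combination of a low-level (fine-grained) feature map with an upsampled high-level (semantic) feature map, and a cosine-based similarity loss between source and translated features. All function names, the softmax weighting scheme, and the nearest-neighbor upsampling here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def adaptive_fuse(low, high, w_low, w_high):
    """Hypothetical fusion: combine a low-level feature map (fine detail)
    with an upsampled high-level feature map (semantics) using
    softmax-normalized mixing weights. Maps are channels-first (C, H, W)."""
    # Nearest-neighbor upsample the high-level map to the low-level resolution.
    fh = low.shape[1] // high.shape[1]
    fw = low.shape[2] // high.shape[2]
    high_up = np.repeat(np.repeat(high, fh, axis=1), fw, axis=2)
    # Adaptive weights: softmax over two scalar mixing logits.
    e = np.exp([w_low, w_high])
    a_low, a_high = e / e.sum()
    return a_low * low + a_high * high_up

def feature_similarity_loss(f_src, f_trans):
    """Hypothetical feature-similarity loss: 1 - cosine similarity between
    flattened feature maps, encouraging the translated image's features
    to stay close to the source content."""
    a, b = f_src.ravel(), f_trans.ravel()
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return 1.0 - cos
```

With equal logits (`w_low == w_high`) the fusion reduces to a plain average of the two maps; a learnable version would predict these logits from the features themselves, which is presumably where the "adaptive" behavior comes from.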