Abstract

Unpaired image-to-image translation (I2IT) involves establishing an effective mapping between the source and target domains to enable cross-domain image transformation. Previous contrastive learning methods did not adequately account for feature variation between the two domains or for the interdependence of elements within those features, which can lead to model instability and blurred image edges. To this end, we propose a multi-attention bidirectional contrastive learning method for unpaired I2IT, referred to as MabCUT. We design separate embedding blocks for each domain based on depthwise separable convolutions and train them simultaneously on both the source and target domains. We then use a pixel-level multi-attention extractor to query images from the embedding blocks and select the feature blocks carrying the most important information, thereby preserving essential features of the source domain. To enhance the feature representation capability of the model, we also build the generator on depthwise separable convolutions. Comprehensive evaluations on three datasets demonstrate that our approach improves the quality of unpaired I2IT while avoiding the image blurring associated with mode collapse.
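The abstract motivates depthwise separable convolutions for the embedding blocks and generator, a standard trick for cutting parameter count and computation. As a rough illustration (the paper's actual layer shapes are not given here, so the channel counts below are hypothetical), a depthwise separable layer replaces one dense k x k convolution with a per-channel k x k depthwise convolution followed by a 1 x 1 pointwise convolution:

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Illustrative arithmetic only; channel sizes (64 -> 128, 3x3) are assumed,
# not taken from the paper.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # One k x k kernel per (input channel, output channel) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise: one k x k kernel per input channel.
    # Pointwise: a 1 x 1 convolution mixing channels.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)          # 73728
sep = depthwise_separable_params(c_in, c_out, k)    # 8768
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this hypothetical layer the separable variant uses roughly 8x fewer parameters, which is the kind of saving that makes the design attractive in image-to-image generators.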
