Contrastive learning has recently proven powerful for cross-domain feature learning and is widely used in image translation tasks. However, existing methods often treat positive and negative samples equally, overlooking their differing contributions to model optimization, which weakens the feature representations learned by the generative model. In this paper, we propose a novel image translation model based on asymmetric slack contrastive learning. We design a new asymmetric contrastive loss by introducing a slack adjustment factor; theoretical analysis shows that it adapts its optimization to different positive and negative samples and significantly improves optimization efficiency. In addition, to better preserve local structural relationships during image translation, we construct a regional differential structural consistency correction block based on differential vectors. Comparative experiments against seven existing methods on five datasets show that our method maintains structural consistency between cross-domain images at a deeper level and more effectively establishes the true image-domain mapping, yielding higher-quality generated images.
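To make the idea of an asymmetric slack adjustment concrete, the following is a minimal, illustrative sketch of an InfoNCE-style contrastive loss in which a hypothetical `slack` factor re-weights each negative sample by its difficulty. The function name, the `slack` parameter, and the specific weighting form are assumptions for illustration only; they are not the paper's actual formulation.

```python
import math

def asymmetric_nce_loss(pos_sim, neg_sims, tau=0.07, slack=0.5):
    """Illustrative InfoNCE-style loss with an asymmetric slack weighting.

    pos_sim:  cosine similarity of the query to its positive sample.
    neg_sims: similarities of the query to its negative samples.
    slack:    hypothetical factor in [0, 1]; smaller values let the
              per-negative difficulty term dominate the weighting.
    """
    pos = math.exp(pos_sim / tau)
    # Weight each negative by its difficulty: harder negatives
    # (higher similarity) contribute more to the denominator.
    negs = sum(
        math.exp(s / tau) * (slack + (1.0 - slack) * max(s, 0.0))
        for s in neg_sims
    )
    return -math.log(pos / (pos + negs))
```

Under this sketch, increasing the positive similarity lowers the loss, while hard negatives are penalized more strongly than easy ones, which is the asymmetry the abstract alludes to.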