Abstract

This paper proposes an unsupervised image-to-image (UI2I) translation model, the Perceptual Contrastive Generative Adversarial Network (PCGAN), which mitigates the distortion problem and thereby improves on traditional UI2I methods. PCGAN is designed as a two-stage UI2I model. The first stage leverages a novel image warping to transform the shapes of objects in the input (source) images. The second stage uses residual prediction to refine the outputs of the first stage. To strengthen the image warping, a loss function called Perceptual Patch-Wise InfoNCE is developed in PCGAN to effectively preserve the visual correspondences between warped images and refined images. Quantitative evaluations and visual comparisons on UI2I benchmarks show that PCGAN outperforms the other existing methods considered here.
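The abstract does not give the exact form of the Perceptual Patch-Wise InfoNCE loss, so the following is only a minimal PyTorch sketch of a generic patch-wise InfoNCE objective in the style popularized by CUT (Contrastive Unpaired Translation), which this loss presumably extends with a perceptual component. The function name `patch_infonce`, its signature, and the temperature `tau` are illustrative assumptions, not PCGAN's actual implementation.

```python
# Sketch of a generic patch-wise InfoNCE loss (CUT-style), NOT the paper's
# exact Perceptual Patch-Wise InfoNCE; the perceptual weighting and feature
# extractor used by PCGAN are not specified in the abstract.
import torch
import torch.nn.functional as F

def patch_infonce(query, positive, negatives, tau=0.07):
    """query:     (B, D) features of patches from the refined image.
    positive:  (B, D) features of the corresponding (warped-image) patches.
    negatives: (B, N, D) features of non-corresponding patches.
    Returns the mean InfoNCE loss over the batch of patches."""
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    # Positive logit: similarity between each query and its matching patch.
    l_pos = (query * positive).sum(dim=-1, keepdim=True) / tau           # (B, 1)
    # Negative logits: similarity of each query to its N non-matching patches.
    l_neg = torch.bmm(negatives, query.unsqueeze(-1)).squeeze(-1) / tau  # (B, N)
    logits = torch.cat([l_pos, l_neg], dim=1)                            # (B, 1+N)
    # The positive always sits at index 0, so the classification target is 0.
    target = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, target)

# Usage with dummy patch features: 8 patches, 256-dim, 64 negatives each.
q, p = torch.randn(8, 256), torch.randn(8, 256)
n = torch.randn(8, 64, 256)
loss = patch_infonce(q, p, n)
```

Framing the loss as a (1+N)-way classification with the positive at index 0 lets `F.cross_entropy` implement the InfoNCE log-ratio directly, which is the standard trick in contrastive-learning codebases.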
