Abstract

Thanks to the rapid development of Generative Adversarial Networks (GANs) in recent years, great progress has been made in GAN-based image translation. Image-to-image translation is an important application in computer vision, covering image inpainting, image colorization, super-resolution, and image style transfer. Many classic GAN-based image translation methods have been proposed in recent years, such as CycleGAN, UNIT, and AGGAN. This paper studies and tests the U-GAT-IT unsupervised image style transfer method, whose authors introduced a new attention module and a new learnable normalization function (AdaLIN) that together enable flexible control over how much shape and texture change during image conversion. This paper tests and verifies the method on a new dataset, attempting bidirectional conversion between real face photos and sketches, and analyzes the method qualitatively and quantitatively by computing PSNR and SSIM.
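The quantitative evaluation mentioned above relies on PSNR and SSIM. As a minimal sketch (not the paper's actual evaluation code), the two metrics can be computed from standard definitions with NumPy; the SSIM here uses global image statistics rather than the usual sliding Gaussian window, which is a simplification:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two same-shaped images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified SSIM using global statistics (no sliding window)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # standard stabilizing constants
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

In a style-transfer evaluation, `ref` would be the ground-truth sketch (or photo) and `test` the generated output; higher PSNR and SSIM indicate closer agreement.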
