Abstract

The generative adversarial network (GAN) is a deep learning model widely applied to tasks such as image generation, semantic segmentation, and super-resolution. CycleGAN is a model architecture used in a variety of image-to-image translation applications, and this paper focuses on the CycleGAN model. To improve the network's capacity for extracting image features, the generator uses a U-Net architecture consisting of eight down-sampling and eight up-sampling layers. For the discriminator, we use the Markovian discriminator of PatchGAN, which preserves high-resolution, high-detail characteristics in image style transfer. To improve running efficiency, depthwise separable convolutions are combined with standard convolutions in the Markovian discriminator; the experimental results show that this effectively shortens the running time. We then compare images generated with the L1, L2, and smooth L1 loss functions. The experiments show that the CycleGAN network effectively performs image style transfer. The L1 loss model retains the details of the original image well. In natural photos generated from Monet paintings, the L2 loss model renders distant regions more clearly and produces a color tone closer to the original image, while images generated by the smooth L1 loss model are smoother. Both the L1 and smooth L1 loss models show some miscoloring in natural photos generated from Monet paintings. Overall, the L2 loss model is more stable and produces better images.
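The abstract does not include code; the sketch below is a minimal PyTorch illustration of the discriminator design it describes, i.e. a PatchGAN-style Markovian discriminator in which some standard convolutions are replaced by depthwise separable ones. The class names (`DepthwiseSeparableConv`, `PatchDiscriminator`) and the layer counts and channel widths are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A k x k depthwise convolution (one filter per channel) followed by a
    1x1 pointwise convolution, replacing a standard k x k convolution at a
    fraction of the parameter count and compute."""
    def __init__(self, in_ch, out_ch, kernel_size=4, stride=2, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   stride=stride, padding=padding,
                                   groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class PatchDiscriminator(nn.Module):
    """PatchGAN-style Markovian discriminator: instead of a single scalar,
    it outputs a grid of real/fake scores, one per receptive-field patch.
    The inner layers here use depthwise separable convolutions; the first
    and last layers remain standard convolutions."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            DepthwiseSeparableConv(base, base * 2),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            DepthwiseSeparableConv(base * 2, base * 4),
            nn.InstanceNorm2d(base * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # final 1-channel map of per-patch scores
            nn.Conv2d(base * 4, 1, 4, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# e.g. a 3x256x256 image yields a 1x31x31 grid of patch scores
scores = PatchDiscriminator()(torch.rand(1, 3, 256, 256))
```

The three reconstruction penalties compared in the experiments correspond directly to PyTorch's built-in criteria. A minimal sketch, with dummy tensors standing in for the original and cycle-reconstructed images:

```python
import torch
import torch.nn as nn

real = torch.rand(1, 3, 256, 256)           # original image (dummy data)
reconstructed = torch.rand(1, 3, 256, 256)  # cycle-reconstructed image

l1 = nn.L1Loss()(reconstructed, real)                # mean |x - y|
l2 = nn.MSELoss()(reconstructed, real)               # mean (x - y)^2
smooth_l1 = nn.SmoothL1Loss()(reconstructed, real)   # quadratic near 0, linear beyond
```

The smooth L1 criterion behaves like L2 for small errors and like L1 for large ones, which is consistent with the smoother but occasionally miscolored outputs the abstract reports.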
