Abstract

Convolutional neural network (CNN)-based GAN models suffer from limited data sets and low rendering efficiency in the segmentation and rendering of painting art. To address these problems, this paper renders image styles with an improved cycle-consistent generative adversarial network (CycleGAN). The method replaces the deep residual network (ResNet) in the original generator with a densely connected convolutional network (DenseNet) and trains adversarially with a perceptual loss function. The painting art style rendering system built in this paper combines the improved CycleGAN with a perceptual adversarial network (PAN), which removes the model's dependence on paired training samples. The proposed method also improves the quality of the generated painting-style images, further improves training stability, and accelerates network convergence. Experiments were conducted on the painting art style rendering system based on the proposed model. The results show that image style rendering with the improved CycleGAN + PAN model, trained with perceptual adversarial loss, achieves better results: the PSNR of the generated images increases by 6.27% on average, and the SSIM values all increase by about 10%. The improved CycleGAN + PAN method therefore produces better painting art style images and has strong application value.
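The generator change described above, DenseNet-style blocks in place of ResNet blocks, can be illustrated with a toy sketch. This is a minimal, hypothetical illustration using flat vectors and random linear layers, not the paper's convolutional architecture: the point is that a residual block *adds* its transform to the input, while a dense block *concatenates* every earlier feature map so features are reused downstream.

```python
import numpy as np

def residual_block(x, weight):
    """ResNet-style block: output = input + transform(input)."""
    return x + np.maximum(weight @ x, 0.0)

def dense_block(x, weights):
    """DenseNet-style block: each layer receives the concatenation of
    all earlier feature maps, so features are reused, not summed away."""
    feats = [x]
    for w in weights:
        inp = np.concatenate(feats)              # all previous outputs
        feats.append(np.maximum(w @ inp, 0.0))   # new feature map
    return np.concatenate(feats)                 # input + every layer's output
```

Note how the dense block's output grows with each layer (here 8 input features plus 4 per layer), which is what gives DenseNet its feature-reuse property.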

Highlights

  • In recent years, deep learning has been widely used in many fields such as medical imaging [1], remote sensing [2], and three-dimensional modeling [3] and has played an important role in promoting the application of artificial intelligence in multiple industries

  • The innovation of the perceptual adversarial network (PAN) is that it no longer requires a complex loss function hand-crafted from human experience, as traditional image models do. The method automatically learns the mapping from input to output images through an adversarial network and applies it to the image conversion problem, yielding a generalizable model. The PAN model is based on the generative adversarial network (GAN) model, combined with a perceptual loss for adversarial training, and enhances the naturalness and realism of generated images. PAN can realize a variety of image conversion tasks, such as image super-resolution, denoising, semantic segmentation, and automatic completion. Therefore, in this paper, we use the PAN model to improve the performance and efficiency of the CycleGAN model for rendering painting art style images
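The perceptual-adversarial idea above can be sketched numerically: instead of a hand-crafted pixel loss, the loss compares *hidden-layer activations* of a discriminator network between the generated and target images. The toy "discriminator" below uses two fixed random linear layers as stand-ins for convolutional layers; the layer sizes and weights are hypothetical, chosen only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "discriminator": two fixed random layers standing in for conv layers.
W1 = rng.standard_normal((16, 64)) * 0.1
W2 = rng.standard_normal((8, 16)) * 0.1

def hidden_features(x):
    """Activations of each hidden layer for a flattened image x."""
    h1 = np.maximum(W1 @ x, 0.0)   # ReLU layer 1
    h2 = np.maximum(W2 @ h1, 0.0)  # ReLU layer 2
    return [h1, h2]

def perceptual_adversarial_loss(generated, target, weights=(1.0, 1.0)):
    """PAN-style loss: weighted sum of L1 gaps between hidden features."""
    fg, ft = hidden_features(generated), hidden_features(target)
    return sum(w * np.mean(np.abs(a - b)) for w, a, b in zip(weights, fg, ft))
```

In the actual PAN formulation the discriminator is trained to *maximize* this feature gap while the generator minimizes it, so the perceptual criterion itself adapts during training rather than staying fixed.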

  • In order to verify the feasibility and effectiveness of the rendering model proposed in this paper, the objective functions of CycleGAN, CycleGAN + PAN, and the improved CycleGAN + PAN are used in the experiment to perform image painting art style rendering experiments. The experimental results are shown in Figure 7. The first column in Figure 7 is the original image, the second column is the style image, and the third, fourth, and fifth columns are, respectively, the style rendering results of the CycleGAN model, the CycleGAN + PAN model, and the improved CycleGAN + PAN model
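The PSNR and SSIM gains quoted in the abstract follow the standard definitions of those metrics. A minimal sketch is below; for brevity it uses a global-statistics SSIM rather than the usual windowed (e.g. 11×11 Gaussian) version, so the exact values would differ from a standard implementation:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified SSIM from global image statistics (no sliding window)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2          # standard stabilizing constants
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Higher PSNR means less pixel-wise distortion; SSIM ranges up to 1.0 for identical images, which is why relative improvements in both metrics are reported.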

Summary

Introduction

Deep learning has been widely used in many fields such as medical imaging [1], remote sensing [2], and three-dimensional modeling [3] and has played an important role in promoting the application of artificial intelligence across industries. The study in [13] proposed a perceptual adversarial network (PAN) model that combines a perceptual loss with the GAN model and realized a variety of image style conversion applications. The method proposed here combines perceptual loss, content loss, and style loss into a new perceptual loss function; the loss network and the image style conversion network can be updated alternately, replacing the fixed loss network [16], while the original generator network structure is also improved. The experimental results show that the proposed method enhances the background definition of the image, brings it closer to the original image in content and style, increases the convergence speed, and produces a more realistic style rendering effect.

Related Works
Improved Image Style Rendering Network Structure
Simulation Experiment and Result Analysis
Conclusion