Abstract

Texture fusion applies the style of one image (the style image) to another (the content image); it is a technique for artistic creation and image editing. In recent years, the rapid development of deep learning has injected new momentum into computer vision, and a large number of deep-learning-based image style transfer algorithms have been proposed. At the same time, current character conversion algorithms based on unsupervised learning suffer from loss of the content and structure of the generated characters and fail to learn good face deformation, resulting in poor image generation quality. This paper reviews the research background and significance of image style transfer methods, summarizing them in the chronological order of their development; it then surveys deep-learning-based style transfer algorithms and analyzes the advantages and disadvantages of each class. Building on the fast style transfer algorithm, the proposed method adds a saliency detection network and designs a saliency loss function: during training, the difference between the saliency maps of the generated image and the content image is additionally computed, and this saliency loss is included as part of the total loss for iterative training. Experiments show that the stylized images generated by this algorithm better retain the salient regions of the content image and have good visual quality. Compared with the original network, the attention mechanism adds very few parameters and almost no additional computational burden.
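
The abstract describes the saliency term only in words. Below is a minimal sketch of how such a total loss could be assembled, assuming a PyTorch setup with precomputed VGG feature dictionaries and a frozen pretrained saliency detector; all names, layer keys, and loss weights here are illustrative assumptions, not the paper's exact implementation.

    import torch
    import torch.nn.functional as F

    def gram_matrix(feat):
        # Channel-by-channel correlation of a (B, C, H, W) feature map.
        b, c, h, w = feat.shape
        f = feat.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def total_loss(generated, content, gen_feats, content_feats, style_feats,
                   saliency_net, w_content=1.0, w_style=10.0, w_saliency=5.0):
        # Perceptual content loss at one (assumed) VGG layer.
        loss_content = F.mse_loss(gen_feats["relu3_3"], content_feats["relu3_3"])

        # Style loss: Gram-matrix distance over the style layers.
        loss_style = sum(F.mse_loss(gram_matrix(gen_feats[k]), gram_matrix(style_feats[k]))
                         for k in style_feats)

        # Saliency loss: the generated image should keep the saliency map of the
        # content image (saliency_net is assumed to be a frozen, pretrained detector).
        with torch.no_grad():
            sal_content = saliency_net(content)
        sal_generated = saliency_net(generated)
        loss_saliency = F.mse_loss(sal_generated, sal_content)

        return w_content * loss_content + w_style * loss_style + w_saliency * loss_saliency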

Highlights

  • Since the digital image coding method was proposed, images can be converted into discrete pixel data and stored in digital storage devices [1]. The advancement of digital storage has greatly promoted the development of digital image processing technology

  • The semantic information of the content image is lost in classical approaches. The perceptual loss function was proposed, which retains the structure of the content image to a certain extent, but it retains all of the features of the content image, so the stylized image has no sense of hierarchy. Transfer of arbitrary styles is realized, but the generated stylized image shows a fairly obvious "grid" artifact, resulting in unsatisfactory visual effects. The proposed image style transfer algorithm with salient-area preservation better preserves the salient areas of the content image in the stylized image, and the salient areas stand out clearly from the background, giving a better visual effect

  • From the comparison of saliency maps, the saliency map of the stylized image generated in this chapter is the most consistent with that of the content image, which means that the algorithm in this chapter retains the prominent areas of the content image well while changing its style, enhancing the visual effect of the stylized image (a sketch of such a saliency-map comparison follows this list)

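The saliency-map comparison mentioned in the last highlight can be expressed as a simple consistency score. The sketch below assumes a pretrained saliency detector returning single-channel maps with values in [0, 1]; the metric itself is a hypothetical stand-in for the paper's evaluation, not its exact measure.

    import torch

    def saliency_consistency(stylized, content, saliency_net):
        # Score in [0, 1]: 1.0 means the stylized image has exactly the same
        # saliency map as its content image; lower means salient regions were lost.
        with torch.no_grad():
            sal_stylized = saliency_net(stylized)  # assumed shape (B, 1, H, W), values in [0, 1]
            sal_content = saliency_net(content)
        mae = (sal_stylized - sal_content).abs().mean()
        return 1.0 - mae.item()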

Introduction

Since the digital image coding method was proposed, images can be converted into discrete pixel data and stored in digital storage devices [1]. The advancement of digital storage has greatly promoted the development of digital image processing technology. Classical style transfer algorithms mostly extract image features at a global level, and the quality of their stylization results is low. In 2019, researchers used a wavelet transform instead of SVD decomposition to propose a new network called PhotoWCT, which transfers the style of real images at the pixel level and greatly improves the quality of the generated images after style transfer [7]. The reference image features extracted by the VGG model were directly integrated into the original image as normalization parameters to achieve fast style transfer. The network can learn multidimensional makeup features and transfer them to the target image, further improving the quality of makeup transfer. From the perspective of improving the quality of stylized images, an image style transfer algorithm with salient-area preservation is proposed; then, from the perspective of improving the efficiency of style transfer, a lightweight image style transfer algorithm with an attention mechanism is proposed.
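
The "normalization parameters" idea mentioned above matches the adaptive instance normalization family of methods: the content features are re-normalized to take on the channel-wise statistics of the reference (style) features. The following is a minimal sketch of that operation, assuming VGG feature maps shaped (B, C, H, W); it is not necessarily the exact network used in the cited work.

    import torch

    def adaptive_instance_norm(content_feat, style_feat, eps=1e-5):
        # Re-normalize each channel of the content features so it takes on the
        # mean and standard deviation of the corresponding style-feature channel.
        b, c = content_feat.shape[:2]
        c_mean = content_feat.reshape(b, c, -1).mean(dim=2).reshape(b, c, 1, 1)
        c_std = content_feat.reshape(b, c, -1).std(dim=2).reshape(b, c, 1, 1) + eps
        s_mean = style_feat.reshape(b, c, -1).mean(dim=2).reshape(b, c, 1, 1)
        s_std = style_feat.reshape(b, c, -1).std(dim=2).reshape(b, c, 1, 1)
        return s_std * (content_feat - c_mean) / c_std + s_mean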

Image Reconstruction of Character Style
Character Face Deconstruction
Database Construction
Saliency Evaluation of Style Transfer
Analysis of Experimental Results
Conclusion