Abstract

Artistic style transfer uses two images, a content image and a style image, as references: it preserves the content of the content image as much as possible while transferring the style characteristics of the style image onto it. Existing methods typically rely on various normalization techniques, but these techniques cannot fully transfer different textures to different spatial locations. Self-attention-based methods address this problem and have made progress, but they can also weaken image textures and introduce unnecessary artifacts. Other work adds a mask to fix the image layout before transfer, but such methods struggle to produce coordinated results at the mask boundaries. Embedding a wavelet network into the VGG network has also been explored to obtain more detailed stylized images, and it indeed yields more visually pleasing results. To address these problems, this paper combines the advantages of the wavelet transform, the self-attention mechanism, and the whitening and coloring transform (WCT) for image feature extraction, and proposes a new universal style transfer method that better balances the semantic information of the content image against the style characteristics of the style image. Moreover, this paper uses the self-attention mechanism to capture high-level semantic information of the image and compensate for details lost in the reconstructed image. Unlike previous methods, the proposed method requires no training for a particular style; any style image can be used to transfer style onto the content image. Experimental results show that the model performs well in style transfer and artifact removal, and also demonstrate that the method generalizes well.
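The whitening and coloring transform named in the abstract is a standard, closed-form operation on feature statistics. The abstract does not give the authors' implementation, so the following is only a minimal NumPy sketch of the generic WCT: content features are whitened to remove their correlations, then colored with the covariance of the style features. The function name `wct` and the `(C, H*W)` feature layout are assumptions for illustration, not the paper's code.

```python
import numpy as np

def wct(content_feat, style_feat, eps=1e-5):
    """Generic whitening-and-coloring transform (sketch).

    content_feat, style_feat: (C, N) matrices of flattened feature maps,
    e.g. VGG activations with N = H*W spatial positions per channel.
    Returns content features carrying the style feature statistics.
    """
    # Center both feature sets around their channel-wise means.
    c_mean = content_feat.mean(axis=1, keepdims=True)
    fc = content_feat - c_mean
    s_mean = style_feat.mean(axis=1, keepdims=True)
    fs = style_feat - s_mean

    # Whitening: decorrelate the content features so their
    # covariance becomes (approximately) the identity.
    cov_c = fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(fc.shape[0])
    ec, vc = np.linalg.eigh(cov_c)
    whitened = vc @ np.diag(ec ** -0.5) @ vc.T @ fc

    # Coloring: impose the style feature covariance on the
    # whitened content features.
    cov_s = fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(fs.shape[0])
    es, vs = np.linalg.eigh(cov_s)
    colored = vs @ np.diag(es ** 0.5) @ vs.T @ whitened

    # Re-center with the style mean.
    return colored + s_mean
```

Because the transform is closed-form, no per-style training is needed, which is what makes WCT-based pipelines "universal": any style image's features can be plugged in at inference time.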
