Abstract

Image style transfer is a long-standing research topic in the computer vision community. It aims to learn the style of one image and the content of another in order to generate a new image that combines both. Thanks to the rapid development of convolutional neural networks, the accuracy and visual quality of image segmentation and style transfer have steadily improved, but local content distortion remains a problem. Some recent works introduce a segmentation branch to obtain pixel-level content information and thereby achieve a cleaner transfer result. In this paper, we use the VGG19 convolutional neural network to extract high-level feature maps representing image content, and perform style transfer guided by the DeepLabV3+ semantic segmentation network, so that style is transferred only between the same or similar semantic regions. To prevent content distortion after transfer, we also introduce an affine function that constrains how much the image content may change during transfer. Extensive experimental results show that our method sharpens segmentation boundaries and improves the semantic accuracy of the transferred image. In addition, we evaluated different individuals' expectations of the degree of style transfer: a survey shows that our method produces better image quality, more in line with the expectations of the majority of respondents.
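
As a rough illustration of the segmentation-guided idea described above, the sketch below computes a style loss on VGG19 feature maps that is masked per semantic region, so Gram statistics are matched only between regions sharing the same label (such as masks produced by DeepLabV3+). The layer indices, function names, and normalization are illustrative assumptions rather than the authors' implementation, and the affine content-preservation term mentioned in the abstract is omitted.

```python
# Hypothetical sketch (not the authors' code): segmentation-masked style loss
# built on VGG19 features, in the spirit of the approach described above.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Pretrained VGG19 feature extractor; the layer choice below (relu1_1
# through relu5_1, a common pick in style-transfer work) is an assumption.
features = vgg19(weights="IMAGENET1K_V1").features.eval()
STYLE_LAYERS = [1, 6, 11, 20, 29]

def extract(x):
    """Collect feature maps from the chosen VGG19 layers."""
    out = []
    for i, layer in enumerate(features):
        x = layer(x)
        if i in STYLE_LAYERS:
            out.append(x)
    return out

def masked_gram(feat, mask):
    """Gram matrix of features restricted to one semantic region.

    feat: (1, C, H, W) feature map; mask: (1, 1, Hm, Wm) binary mask.
    Assumes batch size 1.
    """
    _, c, h, w = feat.shape
    m = F.interpolate(mask, size=(h, w), mode="nearest")
    f = (feat * m).reshape(c, h * w)
    # Normalize by the masked area so small regions are not swamped.
    return f @ f.t() / (m.sum() * c + 1e-8)

def style_loss(content_img, style_img, content_masks, style_masks):
    """Sum of per-region Gram differences, so style statistics flow only
    between regions with the same semantic label in the two images."""
    loss = 0.0
    for fc, fs in zip(extract(content_img), extract(style_img)):
        for mc, ms in zip(content_masks, style_masks):
            loss = loss + F.mse_loss(masked_gram(fc, mc),
                                     masked_gram(fs, ms))
    return loss
```

In a full pipeline, a loss of this kind would be combined with a content loss on deeper VGG19 features and the affine regularizer described above, and minimized over the output image by gradient descent.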
