Abstract

In recent years, image style transfer has become an increasingly active research topic in computer vision. Existing CNN-based style transfer methods underpin much of the research on content–style fusion, but they typically fuse content and style through hand-designed, deterministic computations. Because such manual control limits the model's ability to learn the fusion itself, their transfer results are unstable. To solve this problem, we propose a non-definitive style auto-transfer module. The module is built on an attention submodule that guides content–style fusion over both channels and spatial positions; rather than defining the fusion by hand, it lets the model learn the fusion on its own. We also propose a feature shuffle operation, which reduces the influence of the style image on the content of the result. In addition, to better preserve both the high-level and low-level information of the image, our loss function combines a multi-scale content–style loss with an edge-detection loss. All our experiments are conducted on the WikiArt and Microsoft COCO datasets. The experimental results show that our method achieves more stable and visually better results than existing methods.
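
The abstract does not give implementation details, so purely as an illustration, a minimal PyTorch sketch of an attention block that re-weights a content–style fusion over channels and spatial positions might look like the following. The module name, reduction ratio, and fusion-by-addition are all assumptions, not the authors' code:

    import torch
    import torch.nn as nn

    class AttentionFusion(nn.Module):
        """Illustrative channel + spatial attention guiding content-style
        fusion. A sketch of the general idea, not the paper's module."""
        def __init__(self, channels):
            super().__init__()
            # Channel attention: squeeze spatial dims, weight each channel.
            self.channel_gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // 8, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // 8, channels, 1),
                nn.Sigmoid(),
            )
            # Spatial attention: a single-channel map over positions.
            self.spatial_gate = nn.Sequential(
                nn.Conv2d(channels, 1, kernel_size=7, padding=3),
                nn.Sigmoid(),
            )

        def forward(self, content_feat, style_feat):
            fused = content_feat + style_feat         # naive fusion, then re-weighted
            fused = fused * self.channel_gate(fused)  # learn which channels carry style
            fused = fused * self.spatial_gate(fused)  # learn where to apply it
            return fused

The point of such a design is that the fusion weights are learned end to end rather than fixed by a hand-chosen rule, which is the property the abstract attributes to the proposed module.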
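The feature shuffle operation is likewise only named in the abstract. One plausible reading, which we sketch here as a guess rather than the paper's definition, is to permute the spatial positions of the style feature map so that the style image contributes texture statistics without imposing its own content layout:

    import torch

    def feature_shuffle(style_feat):
        """Hypothetical feature shuffle: randomly permute spatial positions
        of the style feature so per-channel statistics survive but the style
        image's spatial content does not. The paper's actual scheme may differ."""
        b, c, h, w = style_feat.shape
        flat = style_feat.flatten(2)                          # (b, c, h*w)
        perm = torch.randperm(h * w, device=style_feat.device)
        return flat[:, :, perm].reshape(b, c, h, w)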
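Finally, the edge-detection term of the loss could in principle be realized with any gradient operator; assuming a Sobel filter on grayscale inputs (our assumption, not a detail from the abstract), a sketch is:

    import torch
    import torch.nn.functional as F

    SOBEL_X = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]).view(1, 1, 3, 3)

    def edge_loss(output, content, eps=1e-8):
        """Hypothetical edge-detection loss: match Sobel gradient magnitudes
        of the stylized output and the content image, so low-level structure
        survives stylization. The paper's exact operator may differ."""
        def edges(img):
            gray = img.mean(dim=1, keepdim=True)  # (b, 1, h, w)
            kx = SOBEL_X.to(img.device)
            gx = F.conv2d(gray, kx, padding=1)
            gy = F.conv2d(gray, kx.transpose(2, 3), padding=1)
            return torch.sqrt(gx ** 2 + gy ** 2 + eps)
        return F.l1_loss(edges(output), edges(content))

A multi-scale content–style loss of the kind the abstract mentions would typically sum such content and style terms over features taken from several encoder layers, so that both coarse structure and fine texture are constrained.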
