Abstract

Despite the great success of deep neural networks on style transfer tasks, the entanglement of content and style in images prevents much of the style information from being captured. To tackle this problem, a novel style disentanglement network is proposed to transfer multi-source style elements. Specifically, we design a learnable content-style separation module that efficiently extracts content and style components from images in the latent space, in contrast to previous approaches that predefine content and style layers in the network. Building on this separation, we further propose a multi-style swap module, which allows the content image to match a richer set of style elements. Additionally, by introducing an alternating training strategy for the main and auxiliary decoders, together with a style disentanglement loss, the stylized results closely resemble original artworks. Experimental results demonstrate the superiority of the proposed method over existing schemes.
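As a rough illustration of the patch-matching idea behind a multi-style swap, the sketch below replaces each content-feature patch with its nearest neighbor drawn from a bank of patches pooled across several style images. This is a minimal PyTorch sketch under our own assumptions: the function name multi_style_swap, the cosine-similarity matching, and the patch size are illustrative and are not taken from the paper.

```python
# Hypothetical sketch of a patch-based style swap pooled over multiple styles.
# Not the paper's actual module; names and hyperparameters are assumptions.

import torch
import torch.nn.functional as F

def multi_style_swap(content_feat, style_feats, patch_size=3):
    """content_feat: (1, C, H, W); style_feats: list of (1, C, Hs, Ws) tensors."""
    # Extract patches from every style feature map and pool them into one bank.
    patches = []
    for s in style_feats:
        p = F.unfold(s, kernel_size=patch_size)   # (1, C*k*k, L)
        patches.append(p.squeeze(0).t())          # (L, C*k*k)
    bank = torch.cat(patches, dim=0)              # (N, C*k*k) over all styles
    bank_n = bank / (bank.norm(dim=1, keepdim=True) + 1e-8)

    # Correlate content patches with the normalized style-patch bank by using
    # the normalized patches as convolution filters. Normalizing only the
    # style side is enough: the content patch norm is constant per location,
    # so the per-location argmax matches cosine similarity.
    C = content_feat.size(1)
    filters = bank_n.view(-1, C, patch_size, patch_size)
    scores = F.conv2d(content_feat, filters, padding=patch_size // 2)

    # Hard assignment: each spatial location selects its best-matching patch.
    best = scores.argmax(dim=1, keepdim=True)
    one_hot = torch.zeros_like(scores).scatter_(1, best, 1.0)

    # Reconstruct with the unnormalized patches via transposed convolution,
    # then average the contributions of overlapping patches.
    deconv_filters = bank.view(-1, C, patch_size, patch_size)
    out = F.conv_transpose2d(one_hot, deconv_filters, padding=patch_size // 2)
    overlap = F.conv_transpose2d(one_hot, torch.ones_like(deconv_filters),
                                 padding=patch_size // 2)
    return out / overlap.clamp(min=1e-8)
```

In a typical pipeline such a swap would operate on encoder features (e.g., VGG relu3_1 activations) before a decoder maps the result back to image space; the hard nearest-neighbor assignment here follows standard patch-based style-swap formulations rather than this paper's specific design.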
