Abstract

Cartoon style transfer has attracted widespread attention. Although many researchers have proposed methods to advance this field, two limitations remain: (1) existing image-to-image cartoon style transfer methods can only perform domain-to-domain transfer, neglecting the specific color and texture of individual cartoon images, and (2) arbitrary style transfer methods only transfer the style of a single style image onto the content image, neglecting the style information of the style domain as a whole. To address these issues, we observe that artists often refer to specific paintings to fine-tune the colors of their artworks. This behavior inspires us to propose a method, based on Variational Autoencoders, that dynamically encodes the style information of a specific cartoon image, allowing the style feature to be cast onto the content feature dynamically. We also introduce a cartoon contrastive learning loss that pulls a stylized image closer to images stylized with the same cartoon reference and pushes it away from others. Extensive experiments demonstrate that our proposed method, Caster, generates higher-quality stylized images that capture both image-specific and domain-level cartoon style information compared with state-of-the-art cartoon style transfer methods.
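For intuition, the following is a minimal, hypothetical sketch of an InfoNCE-style contrastive loss of the kind the abstract alludes to, written in PyTorch. The function name, tensor shapes, and temperature parameter are illustrative assumptions and do not reproduce the paper's actual cartoon contrastive loss.

```python
import torch
import torch.nn.functional as F

def cartoon_contrastive_loss(anchor, positives, negatives, temperature=0.07):
    """Illustrative InfoNCE-style contrastive loss (not the paper's exact formulation).

    anchor:    (D,)   feature of one stylized image.
    positives: (P, D) features of images stylized with the same cartoon reference.
    negatives: (N, D) features of images stylized with other references.
    """
    # Cosine similarity via L2-normalized features.
    anchor = F.normalize(anchor, dim=0)
    positives = F.normalize(positives, dim=1)
    negatives = F.normalize(negatives, dim=1)

    pos_sim = positives @ anchor / temperature   # (P,)
    neg_sim = negatives @ anchor / temperature   # (N,)

    # Each positive is contrasted against all negatives; the correct class is index 0.
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim.expand(len(pos_sim), -1)], dim=1)
    labels = torch.zeros(len(pos_sim), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```

A loss of this form encourages features of images stylized with the same cartoon reference to cluster together while separating them from images stylized with other references, which matches the push/pull behavior described above.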
