Abstract

Deep generative models are effective for style transfer. Previous methods learn one or several specific artist-styles from a collection of artworks. These methods not only homogenize the artist-style across different artworks by the same artist but also generalize poorly to unseen artists. To address these challenges, we propose a double-style transferring module (DSTM). It extracts distinct artist-styles and artwork-styles from different artworks (even ones unseen during training) and preserves the intrinsic diversity among artworks by the same artist. DSTM swaps the two styles during adversarial training, encouraging realistic image generation under arbitrary style combinations. However, learning style from a single artwork often causes over-adaptation to it, introducing structural features of the style image into the output. We therefore further propose an edge enhancing module (EEM) that derives edge information from multi-scale and multi-level features to enhance structural consistency. We evaluate our method broadly across six large-scale benchmark datasets. Empirical results show that our method extracts arbitrary artist-styles and artwork-styles from a single artwork and effectively avoids introducing the style image's structural features. Our method improves the state-of-the-art deception rate from 58.9% to 67.2% and the average FID from 48.74 to 42.83.
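The abstract does not specify DSTM's internal architecture, so the sketch below is only a minimal illustration of the double-style swapping idea, assuming an AdaIN-style conditioning mechanism in PyTorch. Every name here (StyleEncoder, the two style heads, adain) is hypothetical and stands in for whatever the paper actually implements.

```python
# Hypothetical sketch: extract two style codes per artwork, then swap them
# across artworks so the generator sees arbitrary style combinations.
import torch
import torch.nn as nn


class StyleEncoder(nn.Module):
    """Maps one artwork to an artist-style code and an artwork-style code."""

    def __init__(self, dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.artist_head = nn.Linear(128, dim)   # artist-level style code
        self.artwork_head = nn.Linear(128, dim)  # artwork-level style code

    def forward(self, x):
        h = self.backbone(x)
        return self.artist_head(h), self.artwork_head(h)


def adain(content_feat, style_code, to_scale, to_shift):
    """AdaIN-like conditioning: renormalize content features with
    scale/shift parameters predicted from the combined style code."""
    mu = content_feat.mean(dim=(2, 3), keepdim=True)
    sigma = content_feat.std(dim=(2, 3), keepdim=True) + 1e-5
    gamma = to_scale(style_code).unsqueeze(-1).unsqueeze(-1)
    beta = to_shift(style_code).unsqueeze(-1).unsqueeze(-1)
    return gamma * (content_feat - mu) / sigma + beta


# Style swap: artist-style from artwork A, artwork-style from artwork B.
enc = StyleEncoder(dim=256)
artwork_a = torch.randn(1, 3, 256, 256)
artwork_b = torch.randn(1, 3, 256, 256)
artist_a, _ = enc(artwork_a)
_, artwork_b_code = enc(artwork_b)
style_code = torch.cat([artist_a, artwork_b_code], dim=1)  # (1, 512)

to_scale = nn.Linear(512, 128)  # per-channel scale for 128-channel features
to_shift = nn.Linear(512, 128)
content_feat = torch.randn(1, 128, 64, 64)  # from a content encoder
stylized = adain(content_feat, style_code, to_scale, to_shift)
```

In an adversarial setup as described above, a discriminator would then push the generator to produce realistic images from such swapped, possibly never-seen style combinations.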
