Abstract

Arbitrary style transfer has broad application prospects and significant research value, and it is a research hotspot in computer vision. Many studies have shown that arbitrary style transfer can achieve remarkable results. However, existing methods may produce artifacts and sometimes distort the content structure. To overcome this limitation, this paper proposes a novel Attention-wise and Covariance-Matching Module (ACMM) that preserves the content structure well without introducing unpleasant artifacts. First, our method uses global attention covariance matching to align the global statistics of the style features with the content features, thereby producing pleasing stylized images. Second, to help the model better match these global statistics, a histogram loss is introduced to improve the saturation and stability of the resulting colors. Because our method preserves the content structure, appearance transfer can be achieved with simple adjustments to the model. The effectiveness of the proposed method is demonstrated through qualitative and quantitative experiments comparing it with state-of-the-art arbitrary style transfer methods.
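To make the covariance-matching idea concrete, the following is a minimal NumPy sketch of generic whitening-and-coloring covariance matching between feature maps. It is an illustration of the underlying statistic-matching principle only, not the paper's ACMM: the function name `covariance_match`, the `(C, N)` feature layout, and the `eps` regularizer are all assumptions made for this example.

```python
import numpy as np

def covariance_match(fc, fs, eps=1e-5):
    """Transfer the channel covariance of style features to content features.

    fc, fs: (C, N) arrays of flattened feature maps (channels x positions).
    Returns content features whose channel covariance (approximately)
    equals that of fs. A generic whitening-and-coloring sketch; NOT the
    paper's ACMM, which additionally uses attention.
    """
    # Center both feature sets along the spatial axis.
    fc = fc - fc.mean(axis=1, keepdims=True)
    mu_s = fs.mean(axis=1, keepdims=True)
    fs = fs - mu_s

    # Channel covariance matrices (regularized for numerical stability).
    cov_c = fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(fc.shape[0])
    cov_s = fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(fs.shape[0])

    # Whitening: remove the content covariance (cov_c^{-1/2}).
    ec, vc = np.linalg.eigh(cov_c)
    whiten = vc @ np.diag(ec ** -0.5) @ vc.T

    # Coloring: impose the style covariance (cov_s^{1/2}).
    es, vs = np.linalg.eigh(cov_s)
    color = vs @ np.diag(es ** 0.5) @ vs.T

    # Recenter the result on the style mean.
    return color @ whiten @ fc + mu_s
```

In a style-transfer pipeline, `fc` and `fs` would typically be flattened activations from a pretrained encoder (e.g. a VGG layer), and the matched features would be passed to a decoder to produce the stylized image.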

