Abstract

The transfer of artistic style is a major and demanding task in computer vision. Compared with Western paintings, Chinese ink wash paintings have unique characteristics, such as voids, brush strokes, and ink wash tone and diffusion, that prevent existing methods from producing satisfactory results. The main problems are threefold: 1) the generator does not concentrate on key global features; 2) the generated paintings lose the colors of the original content image; and 3) the generated paintings lack clear edges and smooth shading. In this paper, we propose Chip-SAGAN, a novel approach for transforming real-world photos into Chinese ink wash paintings, trained on unpaired photos and Chinese ink wash paintings. We introduce a self-attention module into the generator to capture global dependencies between features. We further introduce an edge-promoting adversarial loss and a color reconstruction loss to ensure that the generated painting preserves the edges and colors of the content image. Experimental results show that our method transforms real-world photos into high-quality Chinese ink wash paintings and surpasses state-of-the-art algorithms.
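To make the self-attention idea concrete, the sketch below shows a minimal SAGAN-style self-attention step over a convolutional feature map, of the kind the abstract says is inserted into the generator. This is an illustrative stand-in, not the paper's exact module: the 1x1 query/key/value convolutions are replaced by plain NumPy matrix multiplies (`Wq`, `Wk`, `Wv` are hypothetical weight matrices), and `gamma`, which is a learnable scalar in practice, is passed in as a constant.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, Wq, Wk, Wv, gamma=0.0):
    """SAGAN-style self-attention over one feature map (illustrative sketch).

    feat : (C, H, W) feature map from the generator.
    Wq, Wk : (C', C) projections standing in for 1x1 query/key convolutions.
    Wv : (C, C) projection standing in for the 1x1 value convolution.
    gamma : residual scale (learnable in the real module; fixed here).
    Returns gamma * attention_output + feat, same shape as feat.
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)           # flatten spatial positions: (C, N)
    q = Wq @ x                           # queries: (C', N)
    k = Wk @ x                           # keys:    (C', N)
    v = Wv @ x                           # values:  (C, N)
    attn = softmax(q.T @ k, axis=-1)     # (N, N): each position attends to all others
    out = v @ attn.T                     # weighted sum of values per position: (C, N)
    return (gamma * out + x).reshape(C, H, W)
```

Because every output position is a weighted sum over all spatial positions, the module captures the global dependencies that plain convolutions, with their limited receptive fields, miss; with `gamma = 0` the block reduces to the identity, which is how such modules are typically initialized.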
