Abstract

Image style transfer aims to synthesize an image that combines the content of one image with the style of another. User studies have revealed that the semantic correspondence between style and content greatly affects the subjective perception of style transfer results. While current studies have made great progress in improving the visual quality of stylized images, most methods directly transfer global style statistics without considering semantic alignment. Existing semantic style transfer approaches still rely on iterative optimization, which is too computationally expensive to be practical. To address these issues, we introduce a novel dual-affinity style embedding network (DaseNet) that synthesizes images with style aligned at the granularity of semantic regions. In the dual-affinity module, feature correlation and semantic correspondence between the content and style images are modeled jointly to embed local style patterns according to the semantic distribution. Furthermore, a semantic-weighted style loss and a region-consistency loss are introduced to ensure semantic alignment and content preservation. With its end-to-end architecture, DaseNet strikes a good balance between visual quality and inference efficiency for semantic style transfer. Experimental results on different scene categories demonstrate the effectiveness of the proposed method.
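
To illustrate the idea behind the dual-affinity module, the sketch below shows one plausible way to combine a feature-affinity term with a semantic-correspondence mask when redistributing style features onto the content layout. This is a minimal illustration under stated assumptions, not DaseNet's actual implementation: the function name dual_affinity_embed, the temperature tau, and the use of hard segmentation labels are assumptions made here for exposition.

```python
import torch
import torch.nn.functional as F


def dual_affinity_embed(content_feat, style_feat, content_seg, style_seg, tau=0.05):
    """Hypothetical dual-affinity style embedding step (not the paper's code).

    content_feat, style_feat : (B, C, H, W) encoder feature maps.
    content_seg, style_seg   : (B, H, W) integer semantic label maps,
                               already resized to the feature resolution.
    Returns style features redistributed onto the content layout.
    """
    B, C, H, W = content_feat.shape
    N = H * W

    # Flatten spatial positions and L2-normalize channels for cosine similarity.
    c = F.normalize(content_feat.view(B, C, N), dim=1)      # (B, C, N)
    s = F.normalize(style_feat.view(B, C, N), dim=1)        # (B, C, N)

    # Feature affinity: cosine similarity between every content/style position pair.
    feat_aff = torch.bmm(c.transpose(1, 2), s)              # (B, N_content, N_style)

    # Semantic affinity: 1 where the two positions carry the same label, else 0.
    sem_aff = (content_seg.view(B, N, 1) == style_seg.view(B, 1, N)).float()

    # Joint affinity: suppress semantically mismatched pairs, then normalize
    # over style positions (a large negative constant avoids NaNs when a
    # content label has no counterpart in the style image).
    joint = feat_aff / tau
    joint = joint.masked_fill(sem_aff == 0, -1e4)
    attn = joint.softmax(dim=-1)                             # (B, N, N)

    # Embed style patterns at content positions according to the joint affinity.
    out = torch.bmm(style_feat.view(B, C, N), attn.transpose(1, 2))
    return out.view(B, C, H, W)
```

In a complete pipeline, the output of such a module would presumably be passed to a decoder and trained end-to-end with the semantic-weighted style loss and the region-consistency loss mentioned above.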
