Abstract

Image style transfer aims to synthesize an image with the content of one image and the style of another. User studies have revealed that the semantic correspondence between style and content greatly affects the subjective perception of style transfer results. While current studies have made great progress in improving the visual quality of stylized images, most methods directly transfer global style statistics without considering semantic alignment. Existing semantic style transfer approaches still work in an iterative optimization fashion, which is computationally too expensive for practical use. Addressing these issues, we introduce a novel dual-affinity style embedding network (DaseNet) to synthesize images with style aligned at semantic-region granularity. In the dual-affinity module, feature correlation and semantic correspondence between content and style images are modeled jointly to embed local style patterns according to the semantic distribution. Furthermore, a semantic-weighted style loss and a region-consistency loss are introduced to ensure semantic alignment and content preservation. With its end-to-end network architecture, DaseNet balances visual quality and inference efficiency for semantic style transfer. Experimental results on different scene categories demonstrate the effectiveness of the proposed method.
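To make the joint modeling of feature correlation and semantic correspondence concrete, the following is a minimal NumPy sketch of one plausible reading of a dual-affinity embedding: a feature-affinity matrix from cosine similarity between content and style positions, masked by a semantic-affinity matrix derived from label maps, so each content position aggregates style features only from semantically matching regions. All function and variable names here are illustrative assumptions, not the paper's actual implementation, which this abstract does not specify.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_affinity_embed(content_feat, style_feat, content_sem, style_sem):
    """Illustrative sketch (not the paper's code) of dual-affinity style embedding.

    content_feat: (C, Nc) flattened content feature map
    style_feat:   (C, Ns) flattened style feature map
    content_sem:  (Nc,) semantic label per content position
    style_sem:    (Ns,) semantic label per style position
    Returns a (C, Nc) map of style features re-embedded at content positions.
    """
    # Feature affinity: cosine similarity between all content/style positions.
    cf = content_feat / (np.linalg.norm(content_feat, axis=0, keepdims=True) + 1e-8)
    sf = style_feat / (np.linalg.norm(style_feat, axis=0, keepdims=True) + 1e-8)
    feat_aff = cf.T @ sf  # (Nc, Ns)

    # Semantic affinity: 1 where labels agree, 0 elsewhere.
    sem_aff = (content_sem[:, None] == style_sem[None, :]).astype(float)

    # Joint affinity: suppress semantically mismatched pairs before normalizing,
    # so attention weights concentrate on same-label style positions.
    joint = softmax(np.where(sem_aff > 0, feat_aff, -1e9), axis=1)  # (Nc, Ns)

    # Each content position receives a convex combination of matched style features.
    return style_feat @ joint.T  # (C, Nc)
```

Under this reading, the semantic mask restricts where style statistics are drawn from, which is one way the method could align style at semantic-region granularity rather than transferring global statistics.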
