Abstract
Arbitrary style transfer is attracting increasing attention in the computer vision community because of its application flexibility. Existing approaches either directly fuse deep style features with deep content features or adaptively normalize content features for global statistical matching. Although effective, these methods are prone to locally unnatural outputs and artifacts because they do not exploit the global contextual semantic distribution of the style-image features. In this paper, a novel global context self-attentional network (GCSANet) is proposed to efficiently generate high-quality stylized results based on the global semantic spatial distribution of style images. First, a context modeling module aggregates the deep features of the style image into global context features. Then, channel-wise interdependencies are captured by a feature transform module. Finally, the style features are appropriately aggregated at each location of the content image. In addition, novel external contrastive losses are proposed to balance the distributions of content and style features and to ensure plausible texture patterns in the stylized images. Ablation studies validate the effectiveness of the proposed components, and extensive quantitative and qualitative experiments demonstrate the superiority of our method for real-time arbitrary image and video style transfer.
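To make the described pipeline concrete, the following is a minimal PyTorch sketch of the three steps named in the abstract: aggregating style features into a global context vector, transforming it to capture channel-wise interdependencies, and broadcasting the result to every content location. The class and parameter names (e.g. GlobalContextStyleFusion, bottleneck_ratio) are illustrative assumptions, and the softmax-pooling aggregation is one plausible reading of the context modeling module, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextStyleFusion(nn.Module):
    """Hypothetical sketch of the pipeline described in the abstract:
    (1) aggregate style features into a single global context vector,
    (2) transform it to capture channel-wise interdependencies,
    (3) add the transformed style context to every content location."""

    def __init__(self, channels, bottleneck_ratio=4):
        super().__init__()
        # 1x1 conv producing a spatial attention map for context aggregation
        self.context_attn = nn.Conv2d(channels, 1, kernel_size=1)
        # bottleneck transform modeling channel-wise interdependencies
        hidden = channels // bottleneck_ratio
        self.channel_transform = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.LayerNorm([hidden, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, content_feat, style_feat):
        b, c, h, w = style_feat.shape
        # context modeling: softmax-weighted pooling over style positions
        attn = self.context_attn(style_feat).view(b, 1, h * w)
        attn = F.softmax(attn, dim=-1)                          # (b, 1, hw)
        style_flat = style_feat.view(b, c, h * w)               # (b, c, hw)
        context = torch.bmm(style_flat, attn.transpose(1, 2))   # (b, c, 1)
        context = context.view(b, c, 1, 1)
        # feature transform: channel-wise interdependencies
        context = self.channel_transform(context)
        # broadcast the style context to every content location
        return content_feat + context


if __name__ == "__main__":
    fusion = GlobalContextStyleFusion(channels=512)
    content = torch.randn(1, 512, 32, 32)
    style = torch.randn(1, 512, 32, 32)
    print(fusion(content, style).shape)  # torch.Size([1, 512, 32, 32])
```

In this sketch the fused features would be passed to a decoder to produce the stylized image; the contrastive losses mentioned in the abstract would act on the decoder output during training and are not shown here.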