Abstract
To produce semantic image style transfer results that are consistent with human perception, the styles of semantic regions in the style image must be transferred to the corresponding semantic regions of the content image. However, when the object categories in the content and style images differ, matching semantic regions between the two images is difficult. To solve this semantic matching problem and guide style transfer based on the matched regions, we propose a novel semantic context-aware image style transfer method that performs semantic context matching followed by a hierarchical local-to-global network. The semantic context matching obtains corresponding regions between the content and style images by exploiting context correlations of different object categories. Based on the matching results, we retrieve semantic context pairs, where each pair consists of two semantically matched regions from the content and style images. To achieve semantic context-aware style transfer, we propose a hierarchical local-to-global network architecture containing two sub-networks: a local context network and a global context network. The former performs style transfer for each semantic context pair from the style image to the content image and generates a local style transfer image that stores detailed style feature representations for the corresponding semantic regions. The latter derives the final stylized image by considering the content image, the style image, and the intermediate local style transfer image, so that inconsistencies between different corresponding semantic regions can be resolved. The experimental results show that the stylized results produced by our method are more consistent with human perception than those of state-of-the-art methods.
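As a minimal illustration of the matching idea (not the paper's actual algorithm), semantic context matching can be sketched as: pair identical categories directly, and otherwise fall back to the style category with the highest context correlation. The `correlation` table here is a hypothetical stand-in for whatever correlation model the method learns.

```python
def match_regions(content_labels, style_labels, correlation):
    """Match each content category to a style category.

    Exact label matches are taken first; otherwise the style category
    with the highest context correlation is chosen. `correlation` is a
    hypothetical {(content_cat, style_cat): score} table -- the actual
    correlation model is not specified in this sketch.
    """
    pairs = {}
    for c in content_labels:
        if c in style_labels:
            pairs[c] = c  # same category present in both images
        else:
            # fall back to the most context-correlated style category
            pairs[c] = max(style_labels,
                           key=lambda s: correlation.get((c, s), 0.0))
    return pairs

# e.g., a building in the content image may be matched to a tree in the
# style image when no building region exists in the style image
pairs = match_regions({"building", "sky"}, {"tree", "sky"},
                      {("building", "tree"): 0.8, ("building", "sky"): 0.1})
```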
Highlights
Image style transfer aims to change strokes, textures, and colors of a content image to those of a style image
To address the visual incoherence problem, we propose a global context network that learns visually consistent transfer results and maintains detailed style strokes based on the content image, the style image, and the local style transfer image
We propose a novel semantic context matching method to automatically obtain the semantic context pairs between the content and style images
Summary
Image style transfer aims to change strokes, textures, and colors of a content image to those of a style image. For high-quality image style transfer, object boundaries and scene structures of the content image should be preserved while the appearances are required to be aligned with the style image. To this end, matching content and style images is essential to image style transfer. Conventional methods for style transfer apply image- or patch-level deep feature matching between the content and style images, and transfer the learned styles from the latter image to the former one. When content regions and style regions are not correctly matched, e.g., matching a building in the content image to a tree in the style image, semantically incoherent transfer typically makes the resulting stylized image inconsistent with human perception.
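To make region-level style matching concrete, one common style representation is the Gram matrix of deep features (Gatys et al.); restricting it to a binary region mask yields a per-region style statistic that can be compared between a matched content/style region pair. The sketch below is an assumed, simplified formulation, not the paper's loss.

```python
import numpy as np

def masked_gram(features, mask):
    """Gram matrix of a C x H x W feature map restricted to a binary
    H x W region mask -- a per-region style statistic, normalized by
    the number of pixels in the region."""
    c, h, w = features.shape
    f = features.reshape(c, h * w) * mask.reshape(1, h * w)
    n = max(mask.sum(), 1)  # avoid division by zero for empty regions
    return f @ f.T / n

def region_style_loss(content_feat, style_feat, c_mask, s_mask):
    """Squared difference between the Gram matrices of a matched
    content/style region pair (a hypothetical region-level style loss)."""
    g_c = masked_gram(content_feat, c_mask)
    g_s = masked_gram(style_feat, s_mask)
    return float(((g_c - g_s) ** 2).mean())
```

In a full pipeline, `features` would come from a pretrained CNN and the masks from semantic segmentation; identical regions yield zero loss, and the loss grows as the regions' feature statistics diverge.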