Abstract
While the recently reported deep photo style transfer [1] improves photographic style transfer, it proves sensitive to spatial differences in the semantic segmentation of the inputs when applied to head portraits: ghost shadows can appear in the stylized image when the segmented regions of the input image and the style reference image are spatially misaligned. To reduce this risk and the influence of spatial differences between the input image and the style image, we introduce a spatial transformation strategy applied before style transfer. With the semantically segmented regions of both images fed to a pre-trained convolutional neural network, we apply a series of affine transformations to the reference image and select the transformed image that maximizes the normalized cross-correlation of the feature maps; this spatially transformed image then serves as the style reference, yielding a robust style transfer in which the incurred ghost shadows are minimized or eliminated. Experiments show that the proposed method performs well even when the semantic segmentations of the two images have large spatial differences, achieving a significant improvement in robustness over the existing benchmark [1].
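To illustrate the transformation-selection step described above, the following is a minimal sketch, not the authors' implementation, assuming PyTorch with a pre-trained torchvision VGG-19 as the feature extractor. The helper names (`ncc`, `apply_affine`, `best_affine_style`) and the small candidate grid of affine transforms are hypothetical placeholders for the paper's actual search.

```python
# Sketch: pick the affine-warped style image whose CNN feature map has the
# highest normalized cross-correlation (NCC) with the content image's
# feature map, then use that warp as the style reference for transfer.
import torch
import torch.nn.functional as F
import torchvision.models as models

# Early convolutional layers of a pre-trained VGG-19 serve as the feature
# extractor (any pre-trained CNN features would play the same role).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:9].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def ncc(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Normalized cross-correlation between two feature maps."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return (a * b).mean()

def apply_affine(img: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Warp a (1, C, H, W) image with a 2x3 affine matrix `theta`."""
    grid = F.affine_grid(theta.unsqueeze(0), img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

def best_affine_style(content: torch.Tensor, style: torch.Tensor,
                      candidates: list[torch.Tensor]) -> torch.Tensor:
    """Return the affine-warped style image whose features correlate
    best with the content image's features."""
    f_content = vgg(content)
    best, best_score = style, -float("inf")
    for theta in candidates:
        warped = apply_affine(style, theta)
        score = ncc(f_content, vgg(warped)).item()
        if score > best_score:
            best, best_score = warped, score
    return best

# Hypothetical candidate set: identity plus small translations.
candidates = [
    torch.tensor([[1.0, 0.0, tx], [0.0, 1.0, ty]])
    for tx in (-0.1, 0.0, 0.1) for ty in (-0.1, 0.0, 0.1)
]
```

An exhaustive grid over a few translations is the simplest possible search; the paper's method may instead optimize the affine parameters directly, and would apply the selection per semantically segmented region rather than to the whole image.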