Abstract

Style transfer is the process of rendering one image (the content image) in the style of another image (the style image). Recent work by Liu et al. (2017) shows that the traditional style transfer methods of Gatys et al. (2016) and Johnson et al. (2016) fail to reproduce the depth of the content image, which is critical for human perception. They suggest preserving the depth map through an additional regularizer in the optimized loss function. However, these traditional methods are either computationally inefficient or require training a separate neural network for each style. The AdaIN method of Huang et al. (2017) transfers an arbitrary style efficiently without training a separate model per style, but it is not able to reproduce the depth map of the content image. We propose an extension to this method that preserves the depth map by applying variable stylization strength. Qualitative analysis and the results of a user evaluation study indicate that the proposed method provides better stylizations than the original AdaIN style transfer method.
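
For reference, the AdaIN operation of Huang et al. (2017) mentioned above aligns the channel-wise mean and standard deviation of the content features to those of the style features. Below is a minimal PyTorch-style sketch, assuming encoder feature maps of shape (N, C, H, W); it is illustrative, not the authors' exact implementation.

import torch

def adain(content_feat, style_feat, eps=1e-5):
    # Channel-wise statistics over the spatial dimensions.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Normalize the content features, then re-scale and re-shift with the style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean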

Highlights

  • The problem of rendering an image (called the content image) in a particular style is known as style transfer and is a long-studied problem in computer vision

  • In 2016, Gatys et al. [2] proposed an algorithm for imposing an arbitrary style, taken from a user-defined style image, on an arbitrary content image, using image representations obtained with deep convolutional networks


Summary

Introduction

The problem of rendering an image (called the content image) in a particular style is known as style transfer and is a long-studied problem in computer vision. In 2016, Gatys et al. [2] proposed an algorithm for imposing an arbitrary style, taken from a user-defined style image, on an arbitrary content image, using image representations obtained with deep convolutional networks. Their method required a computationally expensive optimization in the space of images, taking several minutes to process a single image of moderate resolution on powerful GPUs. Ulyanov et al. [17] and Johnson et al. [7] proposed real-time style transfer algorithms that pass a content image through a pretrained fully convolutional transformer network. Liu et al. (2017) observed that these methods fail to reproduce the depth map of the content image, which is critical for human perception. To address this issue, they extended the traditional methods [2] and [7] with a regularizer forcing preservation of the depth map of the content image. This yielded a significant improvement in rendering quality but remained computationally demanding, requiring either solving a high-dimensional optimization problem for each content-style pair or fitting a separate transformer network for each style. Later architectures, such as AdaIN [6] and others ([3], [8]), allow transferring an arbitrary style without training a separate network per style, but they lack rendering quality because they fail to preserve the depth map of the content image.
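
To make the proposed extension concrete, the sketch below replaces AdaIN's single content-style tradeoff weight with a per-pixel weight derived from an estimated depth map of the content image. This is an illustrative sketch under stated assumptions, not the authors' exact formulation: the mapping from depth to stylization strength, the depth map convention, and the helper name are assumptions.

def variable_strength_blend(stylized, content, depth):
    # stylized, content: (N, C, H, W) feature maps, e.g. stylized = adain(content, style).
    # depth: (N, 1, H, W) in [0, 1]; larger values assumed to mean closer to the camera.
    alpha = 1.0 - depth  # assumed mapping: nearer regions keep more of the content
    return alpha * stylized + (1.0 - alpha) * content

The original AdaIN method performs this interpolation with a single scalar weight at test time; making the weight spatially varying is what allows near and far regions to be stylized with different strengths.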

