Abstract

Style transfer is a computer vision task that attempts to transfer the style of an artistic image onto a content image. Thanks to advances in deep convolutional neural networks, impressive style transfer results have been achieved, but traditional algorithms do not fully exploit semantic information: they are not aware of which regions in the style image should be transferred to which regions in the content image. A common failure case is style transfer involving landscape images, where after stylization the textures and colors of the land often appear in incoherent places such as the river or the sky. In this work, we investigate semantic style transfer for content images with more than two semantic regions. We combine guided Gram matrices with gradient capping and multi-scale representations. Our approach simplifies the parameter-tuning problem, improves style transfer results, and is faster than current semantic methods.
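For intuition, a minimal sketch of the guided Gram matrix idea is given below: feature maps from a CNN layer are masked by a per-region guidance map before the Gram (channel-correlation) matrix is computed, so style statistics are matched region by region. This is an illustrative sketch in PyTorch under our own assumptions (function name, normalization, and mask choice are not taken from the paper).

```python
# Illustrative sketch of a guided Gram matrix (not the paper's exact implementation).
# Features come from one CNN layer; one guidance mask per semantic region.
import torch

def guided_gram(features: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """features: (C, H, W) activations from one layer.
       mask:     (H, W) guidance map in [0, 1] for one semantic region."""
    c, h, w = features.shape
    masked = features * mask              # zero out activations outside the region
    flat = masked.reshape(c, h * w)       # (C, N) flattened spatial dimensions
    gram = flat @ flat.t()                # (C, C) channel correlations within the region
    norm = mask.sum().clamp(min=1.0)      # normalize by the region's area
    return gram / norm

# Toy example: two regions (e.g. "sky" and "land") on a small random feature map.
feats = torch.rand(3, 8, 8)
sky_mask = torch.zeros(8, 8); sky_mask[:4] = 1.0
land_mask = 1.0 - sky_mask
g_sky, g_land = guided_gram(feats, sky_mask), guided_gram(feats, land_mask)
```

In a semantic style loss, one such Gram matrix would be computed per region for both the style and the generated image, and the per-region differences summed, so that land textures are only matched against land regions and sky against sky.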
