Abstract

Redrawing an image in a given style traditionally requires a professional artist, but Neural Style Transfer (NST) opens a way to generate art in a desired style using computers. NST uses a Convolutional Neural Network (CNN) to render a content image in a given artistic style, demonstrating the potential of CNNs to produce painterly images by extracting and blending image content and style. Since then, various methods have been introduced to enhance and extend the fundamental NST approach, making it an active research topic. More recently, Generative Adversarial Networks (GANs) have shown excellent potential in image-to-image translation. Because the goal of image-to-image translation closely matches the style transfer task, this GAN-based approach has extended the boundaries of style transfer from artistic applications to real-life use cases. This paper first delivers a comprehensive summary of state-of-the-art CNN-based and GAN-based style transfer algorithms and then presents a qualitative comparative study of these algorithms. Finally, the review concludes with several applications of style transfer and identifies research gaps for future work.
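To make the CNN-based formulation concrete, the sketch below illustrates the classic optimization-based NST objective: a content loss on deep feature maps plus a style loss on Gram matrices, with the stylized image obtained by optimizing its pixels directly. It assumes a pretrained VGG-19 from torchvision as the feature extractor; the layer indices, loss weights, placeholder images, and iteration count are illustrative assumptions rather than the settings of any specific paper surveyed here, and input normalization is omitted for brevity.

```python
# Minimal sketch of optimization-based neural style transfer (content +
# Gram-matrix style losses), assuming a pretrained VGG-19 feature extractor.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)          # the network is fixed; only pixels are optimized

STYLE_LAYERS = [0, 5, 10, 19, 28]    # shallow-to-deep conv layers for style statistics
CONTENT_LAYER = 21                   # deeper conv layer for content representation

def extract(img):
    """Run the image through VGG-19 and collect the chosen feature maps."""
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = x
    return feats

def gram(f):
    """(1, C, H, W) feature map -> normalized (C, C) Gram matrix of channel correlations."""
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def nst_loss(gen_img, content_img, style_img, alpha=1.0, beta=1e3):
    """Weighted sum of content loss (feature matching) and style loss (Gram matching)."""
    g, c, s = extract(gen_img), extract(content_img), extract(style_img)
    content_loss = F.mse_loss(g[CONTENT_LAYER], c[CONTENT_LAYER])
    style_loss = sum(F.mse_loss(gram(g[i]), gram(s[i])) for i in STYLE_LAYERS)
    return alpha * content_loss + beta * style_loss

# Usage sketch: start from the content image and optimize its pixels.
content = torch.rand(1, 3, 256, 256)   # placeholder content image
style = torch.rand(1, 3, 256, 256)     # placeholder style image
gen = content.clone().requires_grad_(True)
opt = torch.optim.LBFGS([gen])

def closure():
    opt.zero_grad()
    loss = nst_loss(gen, content, style)
    loss.backward()
    return loss

for _ in range(10):                    # illustrative number of optimization steps
    opt.step(closure)
```

GAN-based approaches surveyed in the paper replace this per-image optimization with a feed-forward generator trained adversarially, which is what makes them fast enough for the real-life use cases discussed later.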
