Abstract

The essence of image style transfer is to generate images that preserve the structure of the original content image while exhibiting the artistic characteristics of a guiding style image. The rapid rise of deep learning has driven further progress in image style transfer, an already popular research area. Nevertheless, due to the limitations of Convolutional Neural Networks (CNNs), it is difficult to extract and retain the global information of the input images, so image style transfer based on traditional CNNs produces biased representations of the content image. To address these problems, this paper proposes STLTSF (Style Transfer based on Transformer), a transformer-based method that performs image style transfer by exploiting the long-range dependencies of the input images. Unlike conventional visual transformers, STLTSF uses two separate transformer encoders, one to generate a domain-specific content representation and the other to generate a style representation. Multiple transformer layers then act as a decoder that stylizes the output conditioned on the content sequence. The proposed STLTSF approach outperforms traditional CNN-based methods in both qualitative and quantitative experiments.
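The abstract describes a dual-encoder design in which a content sequence is stylized by a transformer decoder. The sketch below illustrates one plausible reading of that architecture in PyTorch; the patch size, embedding dimension, layer counts, shared patch embedding, and use of `nn.TransformerEncoder`/`nn.TransformerDecoder` are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DualEncoderStyleTransfer(nn.Module):
    """Two transformer encoders (content, style) feeding a transformer decoder,
    mirroring the STLTSF idea of stylizing a content sequence under style guidance.
    All hyperparameters here are illustrative assumptions."""

    def __init__(self, patch=8, dim=256, heads=8, layers=3):
        super().__init__()
        # Patch embedding: split the image into non-overlapping patches (shared for content/style here).
        self.to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.content_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True), num_layers=layers)
        self.style_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True), num_layers=layers)
        # Decoder: content tokens attend to style tokens via cross-attention.
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True), num_layers=layers)
        # Project patch tokens back to pixel space.
        self.to_pixels = nn.Sequential(
            nn.ConvTranspose2d(dim, 3, kernel_size=patch, stride=patch), nn.Sigmoid())

    def forward(self, content, style):
        b = content.shape[0]
        c = self.to_patches(content)                            # (B, dim, H/p, W/p)
        hp, wp = c.shape[2], c.shape[3]
        c = c.flatten(2).transpose(1, 2)                        # content token sequence (B, N, dim)
        s = self.to_patches(style).flatten(2).transpose(1, 2)   # style token sequence
        c = self.content_enc(c)
        s = self.style_enc(s)
        out = self.decoder(tgt=c, memory=s)                     # stylize content queries with style memory
        out = out.transpose(1, 2).reshape(b, -1, hp, wp)
        return self.to_pixels(out)

# Usage (illustrative): stylized = DualEncoderStyleTransfer()(content_batch, style_batch)
# with content_batch and style_batch of shape (B, 3, H, W), H and W divisible by the patch size.
```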
