Abstract

It has always been challenging for drawing novices to start from a zero proficiency level; the process itself often discourages them from continuing. A method that could automatically transform line shapes into arbitrary styles would therefore save considerable time. In this paper, building on the machine translation approach of the Sequence to Sequence Learning model, we approximately regard the lines in an image as “words” and the long lines as “sentences,” and train on data extracted from paired images. Our model extracts line features and transfers them to the lines that form the input image to generate the output image, which can be understood as emulating the machine translation process between two languages. Our model achieves promising results, attesting that Sequence to Sequence Learning performs well on line style transfer. Our method can serve as a supplement to GAN models and expand the application and research of image style transfer.

Highlights

  • GANs [1], [2], [5], [6] are being applied to various fields of image processing, such as image style transfer, super-resolution, denoising, and line coloring

  • The main reason for this challenge is that the current GAN model extracts the texture features of the original image instead of the line features [24] when it performs a style transfer

  • In this work, we present a paradigm innovation: we regard an image as a ‘‘paragraph,’’ a line segment as a ‘‘sentence,’’ and the shorter line segments that compose it as ‘‘words,’’ and we develop a sequence to sequence learning method to learn line features and transfer them


Summary

INTRODUCTION

GANs [1], [2], [5], [6] are being applied to various fields of image processing, such as image style transfer, super-resolution, denoising, and line coloring. We compare the method of extracting features from lines in an image with the method of extracting features from words in text, and regard the lines in the image as long line segments composed of many relatively shorter line segments. This way, every short line taken from a long line in the image can be regarded as a ‘‘word,’’ and the long line as a ‘‘sentence’’ composed of words. This paper shows that once we have a means to transform a sketch into ‘‘text’’ that records all of its line structures, the information extracted from the sketch's lines closely resembles text in data form, and the text translation principle can be applied to transfer the sketch style.
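The line-to-"sentence" encoding described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: the names (`split_into_words`, `segment_length`) and the direction-bin/length token encoding are assumptions chosen to show how a polyline could become a token sequence for a seq2seq model.

```python
import math

def split_into_words(polyline, segment_length=5.0):
    """Split a long line (a list of (x, y) points) into short "word" tokens.

    Each "word" is a (direction-bin, length) pair; the whole polyline then
    becomes a token sequence (a "sentence") a seq2seq model could consume.
    This encoding is an illustrative assumption, not the paper's method.
    """
    words = []
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        # Quantize the edge direction into one of 8 bins.
        angle = math.atan2(dy, dx)
        direction_bin = int(((angle + math.pi) / (2 * math.pi)) * 8) % 8
        # Emit roughly one token per segment_length of arc along the edge.
        n = max(1, round(length / segment_length))
        for _ in range(n):
            words.append((direction_bin, round(length / n, 2)))
    return words

# An L-shaped line: two 10-unit edges, each split into two 5-unit "words".
sentence = split_into_words([(0, 0), (10, 0), (10, 10)])
# → [(4, 5.0), (4, 5.0), (6, 5.0), (6, 5.0)]
```

A style-transfer model would then map such source "sentences" to target-style "sentences," analogous to translating between two languages.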

RELATED WORK
EXPERIMENTS
TRAINING RESULTS
Findings
CONCLUSION
