Abstract

Automating line art colorization poses a number of challenging issues. In this paper, we propose a GAN-based approach that incorporates semantic segmentation image data. Our method, named Seg2pix, automatically generates high-quality colorized images, aiming to computerize one of the most tedious and repetitive jobs performed by coloring workers in the webtoon industry. The network structure of Seg2pix is largely a modification of the architecture of Pix2pix, a convolution-based generative adversarial network for image-to-image translation. With this method, we can generate high-quality colorized images of a particular character from only a small amount of training data. Seg2pix is designed to first reproduce a segmented image, which serves as suggestion data for line art colorization. The segmented image is generated by a generative network from a line art image and a segmentation ground truth. In the next step, the generative network creates a colorized image from the line art and the segmented image produced in the former step. In summary, only one line art image is required for testing the generative model, while an original colorized image and a segmented image are additionally required as ground truth for training the model. The segmented and colorized images are generated end-to-end, sharing the same loss functions. With this method, we produce better qualitative results for automatic colorization of a particular character's line art; the improvement is also measured quantitatively by comparison with Learned Perceptual Image Patch Similarity (LPIPS). We believe this may help artists exercise their creative expertise mainly in the areas where computerization is not yet capable.
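As a rough illustration of the two-stage flow described above, the sketch below chains two tiny stand-in generators: stage one maps line art to a segmentation map, and stage two colorizes from the line art concatenated with that segmentation. The module, channel sizes, and layer choices are illustrative assumptions, not the authors' Pix2pix-derived architecture.

```python
import torch
import torch.nn as nn


class TinyGenerator(nn.Module):
    """Drastically reduced stand-in for a Pix2pix-style generator
    (hypothetical sizes; the paper uses a full U-Net-like network)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, out_ch, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Stage 1: line art (1 channel) -> segmentation map (8 region channels, assumed).
seg_gen = TinyGenerator(in_ch=1, out_ch=8)
# Stage 2: line art + generated segmentation -> RGB colorization.
color_gen = TinyGenerator(in_ch=1 + 8, out_ch=3)

line_art = torch.randn(1, 1, 64, 64)            # dummy input batch
seg = seg_gen(line_art)                          # "suggestion data" for coloring
color = color_gen(torch.cat([line_art, seg], dim=1))
```

In training, both stages would be optimized end-to-end against the segmentation and color ground truths under shared loss functions, as the abstract describes; only a single line art image is needed at test time.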

Highlights

  • Line art colorization is an expensive and labor-intensive process, especially in the animation and comics industry, due to the repetitive nature of the work

  • Many challenges and improvements for automating line art colorization have appeared, and most of the studies have been based on Generative Adversarial Networks (GANs) [1]

  • Pix2pix [2], a conditional variant of GANs that translates an input image into an output image, has motivated research into automatic line art colorization


Summary

Introduction

Line art colorization is an expensive and labor-intensive process, especially in the animation and comics industry, due to the repetitive nature of the work. The methods of [6,7] provide a desired colorization style using reference images. Even though these methods generate a fully colorized image from an input line art image, their quality and detail are not sufficient for application in the webtoon industry. Our goal in this paper is to remove such colorization mistakes in the mass production of webtoons and animations and to reduce the time spent on the whole line art coloring process. To accomplish this goal, we needed a method that colorizes a particular character with high-quality color matches, rather than colorizing random characters with poorly matching colors. To produce as many line art drawings as possible, we applied basic edge filters such as Sobel [10] and Canny [11], followed by morphological operations, as well as the open-source neural network model called SketchKeras [12].
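To make the edge-filter step concrete, the following is a minimal Sobel-based line-art extraction, assuming a grayscale input normalized to [0, 1]. This is a simplified sketch of the preprocessing idea only; it does not reproduce the morphological operations or the SketchKeras model, and the threshold value is an illustrative assumption.

```python
import numpy as np


def sobel_line_art(gray: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Extract a rough line-art image (white background, dark lines)
    from a grayscale drawing using the Sobel operator."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    padded = np.pad(gray, 1, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)               # gradient magnitude
    mag /= mag.max() + 1e-8              # normalize to [0, 1]
    # Pixels with weak gradients become white background (255),
    # strong edges become dark lines (0), as in typical line art.
    return (mag < threshold).astype(np.uint8) * 255


toy = np.zeros((8, 8))
toy[:, 4:] = 1.0                 # left half dark, right half bright
lines = sobel_line_art(toy)      # dark lines appear along the boundary
```

A production pipeline would use vectorized filtering (e.g. an optimized convolution) rather than the explicit loops above; they are kept here only to show the operator directly.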

GAN for Colorization
Pix2pix
Sketch Parsing
Webtoon Dataset for Seg2pix
Workflow Overview and Details
Trap Ball Segmentation
Segmentation
Result
Colorize
Attention Layer
Experiments
Comparisons
Analysis of the Results
Conclusions
