Abstract

We implement high-quality sketch colorization using two-stage conditional generative adversarial network (GAN) training based on different intermediate features. The intermediate features used in autonomous colorization are grayscale parsing and interval pixel-level color parsing. Autonomous colorization based on the grayscale parsing feature learns the spatial topology of pixels in the first stage to guide the colorization in the second stage. Autonomous colorization based on the pixel-level color parsing feature learns the color information of a few feature points in the first stage to guide the colorization of all pixels in the second stage. Additionally, we use the intermediate feature of sampling points as a constraint and achieve color reconstruction using Laplacian mesh editing as a special second stage. Furthermore, the interactive colorization uses superpixel color parsing as the intermediate feature. Specifically, we use simple linear iterative clustering (SLIC) to obtain a palette that maintains the edges in the first stage to guide the colorization in the second stage. As for evaluation metrics, we propose a color-coded local binary pattern (CCLBP) score, based on the color distances from the first-order 8 pixels to the central pixel, to measure the degree of color blurring and mess. We also propose a light-sensitivity (LS) score, based on the reversed grayscale map, to measure the degrees of auto painting and overfitting of the color hint. According to the L1 distances between the original and generated color images under these scores, compared with state-of-the-art methods including one-stage approaches such as pix2pix and PaintsChainer and two-stage approaches such as Style2Paints and DeepColor, our model achieves the highest-quality autonomous colorization.
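The CCLBP idea described above (comparing each pixel's color against its 8 first-order neighbors) can be sketched as follows. This is an illustrative approximation, not the paper's exact formula: the threshold value and the way the 8 comparison bits are summarized into a single score are assumptions.

```python
import numpy as np

def cclbp_score(img, thresh=30.0):
    """Illustrative CCLBP-style score (a sketch, not the paper's exact
    definition): for every interior pixel, compare the Euclidean RGB
    distance to each of its 8 first-order neighbors against a threshold,
    giving an 8-bit local pattern; the returned score is the fraction of
    sharp color transitions, a proxy for color blurring/mess."""
    img = img.astype(np.float64)
    h, w = img.shape[:2]
    c = img[1:-1, 1:-1]                      # central pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    bits = []
    for dy, dx in offsets:
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        dist = np.sqrt(((n - c) ** 2).sum(axis=-1))  # color distance
        bits.append(dist > thresh)
    pattern = np.stack(bits, axis=-1)        # 8 comparison bits per pixel
    return pattern.mean()
```

A perfectly flat image scores 0; the score rises as neighboring colors diverge, so comparing the scores of the original and generated images (e.g. by L1 distance, as in the abstract) penalizes blurred or messy colorization.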
Moreover, compared with pix2pix, PaintsChainer and Style2Paints with color hints, according to the proposed objective evaluation as well as a user visual study, our model achieves the highest-quality interactive colorization as well.
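The edge-preserving palette used by the interactive branch can be illustrated with a minimal sketch. Assuming superpixel labels are already available (e.g. from scikit-image's `slic`), flattening each superpixel to its mean color yields a palette image that keeps superpixel edges while removing color detail; the exact first-stage pipeline in the paper may differ.

```python
import numpy as np

def superpixel_palette(img, labels):
    """Illustrative palette extraction (a sketch, not the paper's exact
    pipeline): replace every pixel with the mean color of its superpixel.
    `labels` is an integer label map, e.g. produced by SLIC, so the
    result preserves superpixel edges while flattening the colors."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = img[mask].mean(axis=0)   # mean color per superpixel
    return out.astype(np.uint8)
```

The flattened palette serves as the intermediate feature: the second-stage network only has to refine colors within superpixel boundaries rather than invent them from scratch.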

Highlights

  • Similar to the gray-to-image problem [1], [2], the sketch-to-image problem requires color information to make the generated results vivid

  • We propose two novel objective evaluation criteria based on the color-coded local binary pattern (CCLBP) score and the light-sensitivity (LS) score to evaluate colorization performance

  • Our approach is divided into three parts: autonomous colorization based on a grayscale image, autonomous colorization based on sampling points, and interactive colorization based on color clustering

Introduction

Similar to the gray-to-image problem [1], [2], the sketch-to-image problem requires color information to make the generated results vivid. Coloring is an ill-posed problem that requires generating reasonable colors and textures from a sketch, and it is an attractive issue in the field of non-photorealistic rendering. Traditional energy-based methods such as LazyBrush [3] can only handle lines with low shape complexity, which makes them prone to producing unnatural colors in hair and other detailed regions and to leaving unfilled areas. PaintsChainer [7] is the first online project for sketch colorization using a deep learning method; it offers three automatic colorization variants, PaintsChainer1 [8], PaintsChainer2 [9] and PaintsChainer3 [10].

