Abstract

Various methods based on deep neural networks have been proposed to generate color images from gray-scale images, and generative adversarial networks (GANs) have gradually been applied to image colorization as well. However, the existing methods are texture-unaware, producing dull colors and color bleeding artifacts in the output images. This paper integrates a novel texture-aware bistream GAN into the conventional encoder–decoder structure for image colorization. The proposed bistream feature extraction module (BSFEM) and feature boosting module (FBM) extract global and local features from two parallel encoders and fuse them via a novel hybrid attention structure, which emphasizes the feature channels and spatial locations that are most likely to benefit colorization. In addition, texture colors are better recovered through the proposed multi-scale feature attention module (MSFAM). Quantitative experiments demonstrate that, compared to state-of-the-art approaches, the proposed method improves the PSNR and SSIM metrics by 18% and 8%, respectively. Moreover, the qualitative results show that the method produces visually pleasing color images, especially in terms of recovering texture details and eliminating color bleeding along edges. The source code and data are available online at https://github.com/JarryZang/Image-Colorization-.
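The abstract describes a hybrid attention structure that reweights both feature channels and spatial locations before fusing the two encoder streams. Below is a minimal sketch of one plausible realization, assuming a CBAM-style design in which channel gating is followed by spatial gating; the `HybridAttention` class, the channel counts, and the fusion call are hypothetical illustrations, not the paper's actual BSFEM/FBM implementation.

```python
# Hypothetical sketch (assumed CBAM-style gating), not the paper's exact module.
import torch
import torch.nn as nn


class HybridAttention(nn.Module):
    """Reweights feature channels, then spatial locations (assumed design)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze-and-excitation style bottleneck gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: one conv over per-pixel mean/max channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)            # emphasize informative channels
        avg = x.mean(dim=1, keepdim=True)       # per-pixel mean across channels
        mx, _ = x.max(dim=1, keepdim=True)      # per-pixel max across channels
        return x * self.spatial_gate(torch.cat([avg, mx], dim=1))  # emphasize locations


# Hypothetical fusion of global and local encoder features (256 channels each):
# fused = HybridAttention(512)(torch.cat([global_feat, local_feat], dim=1))
```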
