Abstract

This paper focuses on the colorization of near-infrared (NIR) images. Traditional grayscale-image colorization methods usually depend on user intervention and cannot be extended to NIR image colorization because of inherent complexities, such as objects of different colors emitting the same near-infrared radiation. Furthermore, both traditional reference-based colorization methods and CNN-based automatic colorization methods require a large number of paired and labeled images during training, which cannot be guaranteed for the addressed problem. Benefiting from the advantages of deep learning and generative adversarial networks (GANs) in image-to-image translation, an improved DualGAN architecture is constructed to address this problem. The developed architecture contains four blocks, with a direct connection channel between any two adjacent blocks, and the convolution layers in each block are wrapped with batch normalization and leaky ReLU nonlinearities. The dual networks establish the translation relationship between NIR images and RGB images without requiring paired and labeled data. In addition, a mixed loss function that integrates the generator loss into the discriminators' training is designed to reduce the occurrence of incorrect images produced by the generators. Finally, an extensive comparative analysis on common data sets is conducted to verify the superiority of the proposed method over leading-edge methods in both qualitative and quantitative visual assessments.
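As a rough illustration of the block structure and mixed loss outlined in the abstract, the following PyTorch sketch shows one plausible reading: four convolutional blocks whose layers use batch normalization and leaky ReLU, adjacent blocks joined by a direct shortcut channel, and a discriminator loss that folds in a weighted generator-loss term. The channel counts, kernel sizes, the residual interpretation of the direct connections, and the weighting factor lam are assumptions for illustration only, not values reported in the paper.

```python
# Minimal sketch under assumed settings: 1-channel NIR input, 3-channel RGB
# output, 64 base feature channels, and a residual-style reading of the
# "direct connection channel" between adjacent blocks.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Convolution layer wrapped with batch normalization and leaky ReLU."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):
        # Direct connection channel to the adjacent block: the conv path is
        # combined with a shortcut of the incoming features.
        return self.body(x) + x


class Generator(nn.Module):
    """Four blocks with direct connections between adjacent blocks."""

    def __init__(self, in_channels=1, base=64, out_channels=3):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, base, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ConvBlock(base) for _ in range(4)])
        self.head = nn.Conv2d(base, out_channels, kernel_size=1)

    def forward(self, x):
        return torch.tanh(self.head(self.blocks(self.stem(x))))


def discriminator_loss(d_real, d_fake, generator_loss, lam=0.1):
    """Mixed loss: the usual adversarial terms for the discriminator plus a
    weighted generator-loss term (lam is an illustrative assumption)."""
    adversarial = torch.mean((d_real - 1.0) ** 2) + torch.mean(d_fake ** 2)
    return adversarial + lam * generator_loss
```

In this reading, folding the generator loss into the discriminator objective penalizes the discriminator when the generator is producing clearly incorrect images, which is one way to realize the mixed loss the abstract describes; the exact formulation used in the paper may differ.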
