Abstract

Transforming near-infrared (NIR) images into realistic RGB images is a challenging task. Recently, with the development of deep learning, the colorization of NIR images has improved significantly. However, problems with distorted textures and blurred details remain. The main reason is that colorizing an NIR image requires both color prediction and gray-scale detail recovery. Moreover, NIR images contain limited detail and lack color information, which poses a serious challenge for feature extraction. To address these problems, we propose the Dense Residual Module and Dual-stream Attention-Guided Generative Adversarial Network (DDGAN). The Dense Residual Module (DRM) strengthens feature extraction through dense residual connections and deepens the network to improve its adaptability. The Dual-stream Attention Module (DAM) further improves the quality of colorized images by enhancing important features and suppressing unnecessary ones, so the network focuses on essential visual cues. We also propose a composite loss function consisting of content loss, adversarial loss, perceptual loss, synthesized loss, and total variation loss, which improves the quality of colorized images in terms of both edge structure and visual perception. We evaluate the efficiency and performance of the proposed model on the NIR-to-visible image conversion task. The proposed DDGAN outperforms most existing methods in both efficiency and quality of the generated images on the RGB–NIR and OMSIV datasets. Compared to state-of-the-art methods, DDGAN shows promising results with significant improvements in PSNR, SSIM, MSE, and NRMSE. Extensive experiments show that DDGAN produces state-of-the-art colorized NIR images in both objective metrics and subjective quality.
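As a minimal sketch of the composite objective, the five named terms can be written as a weighted sum (the weighting coefficients $\lambda_1,\dots,\lambda_5$ are an assumption for illustration; the abstract names only the loss terms themselves):

$$
\mathcal{L}_{\text{total}} = \lambda_{1}\,\mathcal{L}_{\text{content}} + \lambda_{2}\,\mathcal{L}_{\text{adv}} + \lambda_{3}\,\mathcal{L}_{\text{perc}} + \lambda_{4}\,\mathcal{L}_{\text{syn}} + \lambda_{5}\,\mathcal{L}_{\text{tv}}
$$

Here $\mathcal{L}_{\text{content}}$, $\mathcal{L}_{\text{adv}}$, $\mathcal{L}_{\text{perc}}$, $\mathcal{L}_{\text{syn}}$, and $\mathcal{L}_{\text{tv}}$ denote the content, adversarial, perceptual, synthesized, and total variation losses, respectively.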
