Abstract

When imaging at long distances with ordinary visible-light cameras, visible wavelengths are easily scattered by fog and other atmospheric effects, so details in RGB images become blurred or lost. Near-infrared (NIR) cameras, by contrast, are robust for long-distance imaging but capture no color. In this paper, we propose LRINet, a long-range imaging method based on multispectral fusion of RGB and NIR images. Because no ground truth exists for this fusion task, we adopt unsupervised learning. LRINet is an end-to-end network based on convolutional neural networks (CNNs) that performs the multispectral fusion in three steps: feature extraction, fusion, and reconstruction. To align unpaired data and handle the discrepancy between RGB and NIR images, we construct WarpingNet, which warps NIR features, and add it to the feature extraction step. Since the dark channel prior (DCP) indicates distance from the camera through the degree of light transmission, we combine it with a structure loss as the weight for fusion. To preserve color fidelity, LRINet fuses the RGB luma channel with the NIR image. Experimental results show that LRINet successfully handles the discrepancy between RGB and NIR images, producing natural-looking color images with clear details, and that it outperforms state-of-the-art fusion models in terms of both visual quality and quantitative measurements.
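The dark channel prior mentioned above is a standard haze statistic: the per-patch minimum over all color channels is near zero for clear scene regions, and its deviation from zero tracks haze (and hence distance) via the transmission estimate. The sketch below, in NumPy, shows this standard DCP computation only; the patch size and `omega` values are illustrative defaults, not LRINet's actual fusion-weight parameters.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over channels, then a minimum filter over a local patch."""
    min_c = img.min(axis=2)                      # channel-wise minimum
    pad = patch // 2
    padded = np.pad(min_c, pad, mode="edge")     # replicate borders
    h, w = min_c.shape
    dark = np.empty_like(min_c)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def transmission(img, atmosphere, omega=0.95, patch=15):
    """Standard DCP transmission estimate: t = 1 - omega * dark_channel(I / A).
    Lower t corresponds to hazier, more distant regions."""
    return 1.0 - omega * dark_channel(img / atmosphere, patch)
```

A map like `transmission(...)` can then serve as a spatial weight: pixels with low transmission (far, haze-degraded) lean more on NIR features, while near pixels keep their RGB detail.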
