Abstract

When imaging over long distances with ordinary visible-light cameras, visible wavelengths are easily scattered by fog and other atmospheric effects, so details in RGB images become blurred or lost. Near-infrared (NIR) cameras, in contrast, are robust to long-distance imaging but capture no color. In this paper, we propose LRINet, a network for long-range imaging via multispectral fusion of RGB and NIR images. We adopt unsupervised learning for the fusion to cope with the absence of ground truth. LRINet is an end-to-end convolutional neural network (CNN) that performs the multispectral fusion in three steps: feature extraction, fusion, and reconstruction. To align unpaired data and handle the discrepancy between RGB and NIR images, we construct a WarpingNet that warps the NIR features and embed it in the feature extraction step. Since the dark channel prior (DCP) indicates distance from the camera through the degree of light transmission, we combine it with a structure loss to weight the fusion. To preserve color fidelity, LRINet fuses only the luma channel of the RGB input with the NIR image. Experimental results show that LRINet successfully handles the discrepancy between RGB and NIR images, producing natural-looking color images with clear details, and outperforms state-of-the-art fusion models in both visual quality and quantitative measurements.
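To make the DCP-weighted, luma-channel fusion concrete, the sketch below shows one plausible way a dark-channel-prior transmission map can weight a blend of the RGB luma channel and a NIR image. This is an illustrative assumption, not the paper's learned CNN: the function names, the simplified atmospheric-light estimate, and the blending rule are hypothetical.

```python
# Minimal sketch (not LRINet itself): DCP transmission as a fusion weight
# between the RGB luma channel and a co-registered NIR image.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(rgb, patch=15):
    """Per-pixel minimum over channels followed by a local minimum filter."""
    return minimum_filter(rgb.min(axis=2), size=patch)

def transmission(rgb, omega=0.95, patch=15):
    """DCP transmission estimate t = 1 - omega * dark_channel(I / A)."""
    dc = dark_channel(rgb, patch)
    flat = rgb.reshape(-1, 3)
    # Simplified atmospheric light A: brightest value among the top 0.1%
    # dark-channel pixels (an assumption for illustration).
    idx = np.argsort(dc.ravel())[-max(1, dc.size // 1000):]
    A = flat[idx].max(axis=0).clip(min=1e-6)
    return 1.0 - omega * dark_channel(rgb / A, patch)

def fuse_luma_nir(rgb, nir, patch=15):
    """Blend luma and NIR: hazier (more distant) regions rely more on NIR."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])     # BT.601 luma
    w = np.clip(1.0 - transmission(rgb, patch=patch), 0.0, 1.0)
    return w * nir + (1.0 - w) * luma                # fused luma channel

# Usage with dummy float images in [0, 1]:
rgb = np.random.rand(64, 64, 3)
nir = np.random.rand(64, 64)
fused_luma = fuse_luma_nir(rgb, nir)
```

In the paper the fusion weights are produced by the network and combined with a structure loss; the hand-crafted blend above only illustrates why low transmission (heavy haze, long range) should shift the result toward the NIR signal while chroma is kept from the RGB input.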
