Abstract

Near-infrared (NIR) band sensors capture digital images of scenes under special conditions such as haze, fog, overwhelming light, or mist, where visible (VS) band sensors are occluded. However, unlike VS images, NIR images contain poor textures and colors of the objects in the scene. In this article, we propose a simple yet effective fusion approach that combines the VS and NIR images to produce an enhanced fused image with better scene details and colors similar to those of the VS image. The proposed approach first estimates a fusion map from the relative difference of the local contrasts of the VS and NIR images. It then extracts non-spectral spatial details from the NIR image; finally, the extracted details are weighted according to the fusion map and injected into the VS image to produce the enhanced fused image. The approach adaptively transfers the useful details from the NIR image that contribute to the enhancement of the fused image. It produces realistic fused images by preserving the colors of the VS image, and it relies on simple, non-iterative calculations with $\mathcal {O}(n)$ complexity. The effectiveness of the proposed approach is experimentally verified by comparison with four state-of-the-art VS-NIR fusion approaches in terms of computational complexity and the quality of the enhanced fused images. Quality is evaluated using two color-distortion measures and a novel aggregation of several blind image quality assessment measures. The proposed approach shows superior performance: it produces enhanced fused images and preserves their quality even when the NIR images suffer from loss of texture or blurring, with acceptably fast execution time. Source code of the proposed approach is available online.
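The pipeline described above (fusion map from relative local contrasts, detail extraction from NIR, weighted injection into VS) can be sketched as follows. This is a minimal illustration, not the paper's exact method: the local-contrast operator (local standard deviation), the low-pass detail extractor, the window size, and all function and parameter names are assumptions chosen for clarity.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_vs_nir(vs_lum, nir, win=7, eps=1e-6):
    """Hedged sketch of VS-NIR detail-injection fusion.

    vs_lum, nir: float arrays in [0, 1] (VS luminance and NIR intensity).
    The contrast and detail operators below are illustrative choices,
    not the paper's exact definitions.
    """
    # Local contrast as local standard deviation (one plausible choice).
    def local_contrast(img):
        mean = uniform_filter(img, win)
        sq_mean = uniform_filter(img * img, win)
        return np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))

    c_vs = local_contrast(vs_lum)
    c_nir = local_contrast(nir)

    # Fusion map from the relative difference of local contrasts:
    # large where the NIR image carries more local structure than the VS image.
    w = np.clip((c_nir - c_vs) / (c_nir + c_vs + eps), 0.0, 1.0)

    # Non-spectral spatial details: NIR minus its low-pass component.
    details = nir - uniform_filter(nir, win)

    # Inject the weighted NIR details into the VS luminance.
    return np.clip(vs_lum + w * details, 0.0, 1.0)
```

Each step is a fixed number of separable box filters and per-pixel operations, which is consistent with the linear $\mathcal {O}(n)$ complexity claimed in the abstract.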

