Abstract

Visible (VS) and near-infrared (NIR) band sensors provide digital images that capture complementary spectral radiation from a scene. Since NIR radiation propagates well through haze, mist, or fog, the captured NIR image contains better scene details than the VS image in such conditions. However, NIR radiation is material dependent and conveys little information about the color or texture of objects in the scene. To exploit the complementary details provided by VS and NIR images, we propose a fusion approach that adaptively injects missing spatial details from the NIR image into the VS image while preserving the spectral content of the VS image. The spatial details are adaptively weighted based on the relative difference between the local contrasts of the NIR and VS images. The proposed approach thus prevents unnecessary color modification and amplification of scene details that would produce an unrealistic fused image. Moreover, the approach is non-iterative and fast, with low O(n) complexity, making it suitable for implementation on embedded camera hardware. Experimental fusion results on natural NIR and VS image pairs show the effectiveness of the proposed approach compared with two alternatives.
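The abstract gives no formulas, but the described idea of contrast-adaptively injecting NIR details into the VS image can be sketched as follows. This is a hypothetical illustration under assumed definitions: local contrast is taken as the local standard deviation, the NIR "details" are a high-pass residual, and the injection weight is the clipped relative contrast difference; `local_mean`, `fuse_vs_nir`, and all parameter choices are the sketch's own, not the paper's.

```python
import numpy as np

def local_mean(img, r=2):
    """Local mean over a (2r+1)x(2r+1) window, edge-padded (box filter).

    Written with explicit shifts for clarity; a box filter can also be
    computed in O(n) with integral images, consistent with the claimed
    overall O(n) complexity.
    """
    p = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_vs_nir(vs_lum, nir, r=2, eps=1e-6):
    """Hypothetical sketch of contrast-adaptive NIR detail injection.

    vs_lum : luminance channel of the visible image, floats in [0, 1]
    nir    : registered NIR image, floats in [0, 1]

    High-pass NIR details are added to the VS luminance only where the
    NIR local contrast exceeds that of the VS image, so regions already
    well resolved in the VS image are left unchanged.
    """
    mu_vs, mu_nir = local_mean(vs_lum, r), local_mean(nir, r)
    # Local contrast as local standard deviation (one common choice).
    sd_vs = np.sqrt(np.maximum(local_mean(vs_lum**2, r) - mu_vs**2, 0.0))
    sd_nir = np.sqrt(np.maximum(local_mean(nir**2, r) - mu_nir**2, 0.0))
    # Weight from the relative contrast difference, clipped to [0, 1]:
    # zero wherever the VS image is at least as contrasty as the NIR one.
    w = np.clip((sd_nir - sd_vs) / (sd_nir + sd_vs + eps), 0.0, 1.0)
    detail = nir - mu_nir  # high-pass NIR residual
    return np.clip(vs_lum + w * detail, 0.0, 1.0)
```

Because the weight is zero where the VS image already has equal or higher local contrast, fusing an image with itself returns it unchanged, which reflects the abstract's claim that the method avoids unnecessary modification.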
