Abstract
It is well known that the quality of visible images taken under different illumination conditions varies greatly, so illumination factors inevitably affect the fusion of infrared and visible images. This paper proposes an infrared and visible image fusion algorithm suited to poor illumination conditions. The algorithm consists of three stages: image preprocessing, multiscale decomposition, and image fusion. The preprocessing stage improves the contrast of the visible image and extracts the visually salient regions of the infrared image. In the multiscale analysis stage, the infrared and visible images are decomposed into different scales by combining a Gaussian transform with a Rolling Guidance Filter, which effectively avoids halo artifacts. In the image fusion stage, the fusion weights of the base-layer coefficients are determined by combining the saliency map of the infrared image with the Illumination Effective Region Map of the visible image, while the detail-layer coefficients are fused by a choose-max rule based on the local variance of the detail features of the source images. Experiments show that the fusion results of the proposed algorithm are robust to illumination variations, and that the fused images retain good detail clarity and preserve the effective information of the source images well under unsatisfactory illumination conditions.
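A minimal sketch of the base/detail decomposition and fusion described above, assuming grayscale float32 inputs in [0, 1]. The filter parameters, the saliency map, and the illumination map passed in are illustrative placeholders rather than the paper's exact settings, and cv2.ximgproc.rollingGuidanceFilter requires the opencv-contrib package.

```python
import cv2
import numpy as np

def decompose(img, sigma_s=3.0, sigma_r=0.05, iters=4):
    """Split an image into a smooth base layer and a detail layer (illustrative parameters)."""
    base = cv2.ximgproc.rollingGuidanceFilter(
        img, d=-1, sigmaColor=sigma_r, sigmaSpace=sigma_s, numOfIter=iters)
    detail = img - base
    return base, detail

def fuse(ir, vis, saliency_ir, illum_vis, var_win=7):
    """Fuse base layers by weighted averaging and detail layers by choose-max on local variance."""
    base_ir, det_ir = decompose(ir)
    base_vis, det_vis = decompose(vis)

    # Base-layer weight: favour the infrared image where it is salient and the
    # visible image where its illumination is judged effective.
    w = saliency_ir / (saliency_ir + illum_vis + 1e-6)
    fused_base = w * base_ir + (1.0 - w) * base_vis

    # Detail-layer choose-max driven by the local variance of each source's details.
    k = (var_win, var_win)
    var_ir = cv2.blur(det_ir ** 2, k) - cv2.blur(det_ir, k) ** 2
    var_vis = cv2.blur(det_vis ** 2, k) - cv2.blur(det_vis, k) ** 2
    fused_detail = np.where(var_ir >= var_vis, det_ir, det_vis)

    return np.clip(fused_base + fused_detail, 0.0, 1.0)
```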
Highlights
The imaging principles of infrared sensors and visible sensors are quite different
In order to solve the above problems, this paper proposes a fusion algorithm for infrared and visible images
In [6], a Gaussian filter is combined with a bilateral filter to decompose the image, and fusion rules are formulated for image components at different scales, yielding a good fusion effect
Summary
The imaging principles of infrared sensors and visible sensors are quite different. Infrared sensors capture the infrared radiation emitted by objects in the scene, whereas visible sensors must receive a moderate amount of light to produce a clear image and therefore depend more heavily on environmental factors; this means visible imaging sensors cannot work normally under poor illumination conditions. Some researchers have proposed illumination-estimation methods to fuse a group of differently exposed visible images in a High Dynamic Range (HDR) scene [1]. Wang et al. [4] proposed a Retinex-like algorithm based on the Bilateral Filter to improve the quality of visible images under poor visual conditions. In [6], a Gaussian filter is combined with a bilateral filter to decompose the image, and fusion rules are formulated for image components at different scales, producing a good fusion effect. The contrast of the fused image is clear and consistent with human visual perception.
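A rough sketch of the two-scale decomposition referenced from [6]: a Gaussian filter extracts the large-scale base layer while a bilateral filter smooths the image without crossing edges, so their differences separate edge structure from fine texture. The kernel sizes and sigmas below are illustrative assumptions, not the values used in the cited work.

```python
import cv2

def two_scale_decompose(img):
    """Decompose a float32 image in [0, 1] into base, edge, and texture layers."""
    gaussian_base = cv2.GaussianBlur(img, (15, 15), sigmaX=5.0)                 # large-scale structure
    bilateral = cv2.bilateralFilter(img, d=9, sigmaColor=0.1, sigmaSpace=5.0)   # edge-preserving smoothing
    edge_layer = bilateral - gaussian_base   # edges and mid-scale structure
    texture_layer = img - bilateral          # fine texture and noise
    return gaussian_base, edge_layer, texture_layer
```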