Abstract

Changes in the illumination conditions of the imaging scene alter the relative amount of complementary information between infrared and visible images. A fusion method should therefore adapt to changing illumination so as to transfer as much of the source images' information as possible into the fused result. However, most existing fusion methods do not account for illumination changes in their construction, which leads to loss of detail and low contrast in the fused images. We propose an illumination-dependent adaptive fusion (IDA Fusion) method to address this problem. First, to process detail information of different scales and brightness information separately, a multi-scale rolling guidance filter (RGF) is used to decompose the source images into small-scale detail layers, large-scale detail layers, and base layers. Second, based on the respective characteristics of small-scale and large-scale details under different illumination conditions, two illumination-dependent rules are designed to combine the small-scale and large-scale detail layers, respectively. These rules adjust their form and parameters according to the average pixel value and the entropy ratio, adaptively transferring detail information from the source images to the fused image. Finally, to combine the base layers, a rule based on weighted least squares (WLS) minimization is proposed to retain as much of the source images' information as possible while maintaining appropriate brightness under different illumination conditions. Experimental results validate the effectiveness of these rules and show that our method outperforms several state-of-the-art methods in terms of information retention and contrast improvement.
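As a rough illustration of the decomposition step, the three-layer split can be sketched as follows. This is a minimal NumPy sketch under simplifying assumptions, not the paper's implementation: a true rolling guidance filter iterates a joint edge-aware filter guided by the previous iterate, which is approximated here by repeated Gaussian smoothing, and the scale parameters `sigma_small` and `sigma_large` are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel truncated at 3 sigma, normalized to sum to 1
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur with edge padding (stand-in for the
    # edge-aware joint filter used by the actual RGF)
    k = gaussian_kernel(sigma)
    r = (len(k) - 1) // 2
    pad = np.pad(img, ((r, r), (0, 0)), mode="edge")
    tmp = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, pad)
    pad = np.pad(tmp, ((0, 0), (r, r)), mode="edge")
    return np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, pad)

def rolling_guidance(img, sigma, iters=4):
    # Simplified RGF: repeated smoothing removes structures smaller
    # than roughly `sigma` while the real joint step would also
    # restore large edges from the guidance image
    out = blur(img, sigma)
    for _ in range(iters - 1):
        out = blur(out, sigma)
    return out

def decompose(img, sigma_small=1.0, sigma_large=4.0):
    # Two-scale decomposition into small-detail, large-detail, base;
    # the three layers sum back to the input exactly
    s1 = rolling_guidance(img, sigma_small)   # small structures removed
    s2 = rolling_guidance(s1, sigma_large)    # large structures removed
    small_detail = img - s1
    large_detail = s1 - s2
    base = s2
    return small_detail, large_detail, base
```

Because each layer is defined as a difference of successive smoothings, summing the small-scale detail, large-scale detail, and base layers reconstructs the source image exactly, which is what lets the fusion rules operate on each layer independently.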
