Abstract

Image fusion merges details from infrared (IR) and visible images to generate a unified composite image that offers richer and more valuable information than either individual image. Surveillance, navigation, remote sensing, and military applications require multiple imaging modalities, including visible and IR, to observe a scene. Because these sensors provide complementary data and improve situational understanding, it is essential to fuse their information into a single image. Fusing IR and visible images presents several challenges arising from differences in imaging modalities and data characteristics, and from the need for accurate and meaningful integration of information. In this context, we propose a novel image fusion architecture that enhances prominent targets, with the objective of integrating thermal information from infrared images into visible images while preserving the textural details of the visible images. In the proposed algorithm, the images from the different sensors are first decomposed into high- and low-frequency components using a guided filter and an average filter, respectively. A unique contrast detection mechanism is proposed that preserves the contrast information of the source images. The contrast details of the IR and visible images are then enhanced using local standard deviation filtering and local range filtering, respectively. We develop a new weight map construction strategy that effectively preserves the complementary data from both source images. These weights, together with the gradient details of the source images, are used to preserve the salient features of the images acquired from the different modalities. A decision-making approach is applied to the high-frequency components of the source images to retain their prominent feature details. Finally, the salient features and the prominent features are integrated to generate the fused image. The developed technique is validated from both subjective and quantitative perspectives. The proposed approach achieves EN, MI, Nabf, and SD values of 6.86815, 13.73269, 0.15390, and 78.16158, respectively, against deep-learning-based approaches, and EN, MI, Nabf, FMIw, and Qabf values of 6.86815, 13.73269, 0.15390, 0.41634, and 0.47196, respectively, against existing traditional fusion methods. The developed technique performs competitively against twenty-seven state-of-the-art techniques.
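
To make the pipeline concrete, the sketch below outlines the two-scale fusion flow described above in Python with NumPy/SciPy. The abstract does not specify the exact weight-map formula, the decision rule, or any filter parameters, so the normalized-saliency weights, the absolute-maximum selection for high-frequency components, and all filter sizes used here are illustrative assumptions rather than the paper's actual method.

```python
# Minimal sketch of the described fusion pipeline (assumptions noted above).
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def guided_filter(I, p, size=15, eps=1e-3):
    """Edge-preserving guided filter (He et al.) built from box filters."""
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def local_std(x, size=9):
    """Local standard deviation (contrast cue for the IR image)."""
    m = uniform_filter(x, size)
    return np.sqrt(np.clip(uniform_filter(x * x, size) - m * m, 0, None))

def local_range(x, size=9):
    """Local max-min range (contrast cue for the visible image)."""
    return maximum_filter(x, size) - minimum_filter(x, size)

def fuse(ir, vis):
    """Fuse a registered IR/visible pair, both float arrays in [0, 1]."""
    # 1. Two-scale decomposition: guided filter for IR, average filter for
    #    visible, as in the abstract; high frequency = image - base layer.
    base_ir, base_vis = guided_filter(ir, ir), uniform_filter(vis, 15)
    high_ir, high_vis = ir - base_ir, vis - base_vis

    # 2. Contrast cues and a normalized weight map (assumed construction).
    s_ir, s_vis = local_std(ir), local_range(vis)
    w_ir = s_ir / (s_ir + s_vis + 1e-12)

    # 3. Weighted fusion of the low-frequency (base) components.
    base_fused = w_ir * base_ir + (1.0 - w_ir) * base_vis

    # 4. Decision rule on high-frequency components: keep the detail
    #    coefficient with the larger magnitude at each pixel (abs-max).
    high_fused = np.where(np.abs(high_ir) >= np.abs(high_vis),
                          high_ir, high_vis)

    # 5. Recombine salient (base) and prominent (detail) information.
    return np.clip(base_fused + high_fused, 0.0, 1.0)
```

As a design note, the abs-max rule in step 4 is one common stand-in for the paper's decision-making approach; the gradient-guided weighting the abstract mentions would replace the simple saliency ratio in step 2.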
