Abstract

Infrared and visible image fusion technology provides many benefits for human vision and computer image processing tasks, including enriched useful information and enhanced surveillance capability. However, existing fusion algorithms struggle to effectively integrate visual features from complex source images. In this paper, we design a novel infrared and visible image fusion algorithm based on visual attention technology, comprising a special visual attention system and a feature fusion strategy based on saliency maps. The special visual attention system first uses the co-occurrence matrix to evaluate image texture complication, which allows a particular modality to be selected for computing the saliency map. Moreover, we improve the iterative operator of the original visual attention model (VAM) and design a fair competition mechanism to ensure that visual features in detail regions are extracted accurately. For the feature fusion strategy, we use the obtained saliency maps to combine the visual attention features and appropriately enhance tiny features so that weak targets remain observable. Unlike general fusion algorithms, the proposed algorithm not only preserves the regions of interest but also retains rich tiny details, which improves the visual ability of both humans and computers. Experimental results under complicated ambient conditions show that the proposed algorithm outperforms state-of-the-art algorithms in both qualitative and quantitative evaluations, and this study can be extended to other types of image fusion.

Highlights

  • Image fusion is an important branch of information fusion, involving research fields such as deep learning, image processing and computer vision [1,2,3]

  • We experimented on the TNO image fusion data set, a public data set in the field of infrared and visible image fusion that contains many different military-relevant scenarios

  • We have extended the proposed fusion algorithm to the fields of medical, multi-focus and multi-exposure image fusion



Introduction

Image fusion is an important branch of information fusion, involving research fields such as deep learning, image processing and computer vision [1,2,3]. Multi-scale transform-based (MST) methods have been widely applied since they were introduced into infrared and visible image fusion, including the quaternion wavelet transform (QWT) [13], pyramid transforms [14] and so on. This kind of method follows three fusion steps [15]: first, the source images are decomposed into multiple scales, each of which contains different feature information; then the coefficients at each scale are combined according to a fusion rule; finally, the fused image is reconstructed by the inverse transform. With the development of computer vision technology, saliency-based methods have also been successfully applied to infrared and visible image fusion because they effectively exploit the complementary information of the source images. In our method, the saliency maps of the infrared and visible images obtained by the special visual attention system are used to integrate complementary information, and a guided filter is used to decompose multi-scale information for enhancing weak features.
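The saliency-weighted fusion idea described above can be illustrated with a minimal sketch. This is not the paper's actual visual attention system or guided-filter decomposition: the `saliency_map` proxy (absolute deviation from the mean intensity, smoothed) and the per-pixel convex weighting are simplified stand-ins chosen for illustration, using only NumPy.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian blur built from two 1-D convolutions.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)
    return out

def saliency_map(img):
    # Crude saliency proxy: smoothed absolute deviation from the mean
    # intensity. Stands in for the paper's special visual attention system.
    return gaussian_blur(np.abs(img - img.mean()), sigma=2.0)

def fuse(ir, vis, eps=1e-8):
    # Per-pixel convex combination of the two sources, weighted by their
    # normalized saliency responses: salient pixels dominate the output.
    s_ir, s_vis = saliency_map(ir), saliency_map(vis)
    w_ir = s_ir / (s_ir + s_vis + eps)
    return w_ir * ir + (1.0 - w_ir) * vis

# Synthetic grayscale inputs in [0, 1] in place of real IR/visible images.
rng = np.random.default_rng(0)
ir = rng.random((64, 64))
vis = rng.random((64, 64))
fused = fuse(ir, vis)
print(fused.shape)
```

Because the weights at every pixel sum to one, each fused pixel lies between the two source intensities; the paper's actual strategy additionally enhances tiny features so that weak targets remain observable.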

Feasibility
Superiority
The Original VAM for Image Fusion
Image Fusion Algorithm Based on Visual Attention Technique
The Special Visual Attention System for Extracting Features
Modality Selection Based on Texture Complication Evaluation
Across-Scale Combinations with a Fair Competition Mechanism
Feature Fusion Strategy Based on the Saliency Maps
Experimental Results and Analyses
Qualitative Evaluation
Quantitative Evaluation
Quantitative Metrics
Evaluation Index
Computational Costs
Extension to Other-Type Image Fusion Field
Conclusions
