Abstract

Effectively selecting useful information from source images and integrating it has long been a challenge for infrared and visible image fusion, because the imaging principles of infrared and visible images differ widely. To address this problem, a novel infrared and visible image fusion algorithm is proposed, with the following contributions: (i) an infrared visual saliency extraction scheme using a global measurement is presented, (ii) a visible visual saliency measurement scheme based on a local measurement strategy is proposed, and (iii) a fusion rule based on an orthogonal space is designed to combine the extracted saliency maps. Specifically, to draw human attention to infrared targets, a coarse-scale decomposition is performed, and a global measurement strategy is then used to obtain the saliency maps. In addition, since visible images contain rich textures, a fine-scale decomposition makes the visual system attend to tiny details, and the visual saliency is then measured by a local measurement strategy. Unlike general fusion rules, an orthogonal space is constructed to integrate the saliency maps, which removes the correlation between the saliency maps and avoids mutual interference. Experiments on public databases demonstrate that the fusion results of the proposed algorithm surpass those of the comparison algorithms in both qualitative and quantitative assessment.
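The orthogonal-space fusion rule described above can be illustrated with a minimal sketch. The paper's exact construction is not given in the abstract, so the following is an assumption: the two saliency maps are treated as vectors, the visible map is projected onto the orthogonal complement of the infrared map (a Gram–Schmidt step), and the decorrelated components are summed to form a fusion weight map. The function name `orthogonal_fusion` and all details are illustrative, not the authors' implementation.

```python
import numpy as np

def orthogonal_fusion(s_ir, s_vis):
    """Decorrelate two saliency maps and fuse them (hypothetical sketch).

    s_ir  : infrared saliency map (global measurement, coarse scale)
    s_vis : visible saliency map (local measurement, fine scale)
    """
    a = s_ir.ravel().astype(float)
    b = s_vis.ravel().astype(float)
    # Gram-Schmidt step: remove from s_vis its component along s_ir,
    # so the two contributions are orthogonal (uncorrelated) and do
    # not interfere with each other when combined.
    proj = (a @ b) / (a @ a + 1e-12) * a
    b_orth = b - proj
    fused = a + b_orth
    # Normalize to [0, 1] so the result can serve as a fusion weight map.
    fused -= fused.min()
    fused /= fused.max() + 1e-12
    return fused.reshape(s_ir.shape)

# Toy 2x2 saliency maps for demonstration.
s_ir = np.array([[0.9, 0.1], [0.2, 0.8]])
s_vis = np.array([[0.3, 0.7], [0.6, 0.4]])
w = orthogonal_fusion(s_ir, s_vis)
```

In this sketch the decorrelation is global (one projection over the flattened maps); a patch-wise variant would follow the same pattern per local window.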
