Abstract

For infrared and visible image fusion, it has always been a challenge to effectively select useful information from the source images and integrate it, because the imaging principles of infrared and visible images differ widely. To address this problem, a novel infrared and visible image fusion algorithm is proposed, which makes the following contributions: (i) an infrared visual saliency extraction scheme based on a global measurement strategy is presented, (ii) a visible visual saliency measurement scheme based on a local measurement strategy is proposed, and (iii) a fusion rule based on orthogonal space is designed to combine the extracted saliency maps. Specifically, to draw attention to infrared targets, a coarse-scale decomposition is performed, and a global measurement strategy is then applied to obtain the infrared saliency maps. Since visible images contain rich textures, a fine-scale decomposition allows the visual system to attend to fine details, and the visible visual saliency is then measured with a local measurement strategy. Unlike general fusion rules, an orthogonal space is constructed to integrate the saliency maps, which removes the correlation between the saliency maps and avoids mutual interference. Experiments on public databases demonstrate that the fusion results of the proposed algorithm are better than those of the comparison algorithms in both qualitative and quantitative assessments.
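To make the three components concrete, the following is a minimal sketch of such a pipeline. It is not the paper's method: the coarse/fine Gaussian decompositions, the global-contrast and local-energy saliency measures, and the Gram-Schmidt style decorrelation used below are assumed stand-ins chosen only to illustrate the overall structure (global infrared saliency, local visible saliency, orthogonalized fusion weights).

```python
# Illustrative sketch only -- the saliency measures and the orthogonal-space
# construction here are assumptions, not the formulas from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def global_saliency(ir, sigma=8.0):
    """Coarse-scale decomposition + global measurement (assumed: distance to global mean)."""
    coarse = gaussian_filter(ir.astype(np.float64), sigma)   # coarse-scale base layer
    return np.abs(coarse - coarse.mean())                    # global-contrast saliency

def local_saliency(vis, sigma=1.0, win=2.0):
    """Fine-scale decomposition + local measurement (assumed: local energy of detail layer)."""
    vis = vis.astype(np.float64)
    detail = vis - gaussian_filter(vis, sigma)               # fine-scale detail layer
    return np.sqrt(gaussian_filter(detail ** 2, win))        # local energy as saliency

def orthogonal_fusion_weights(s_ir, s_vis, eps=1e-12):
    """Decorrelate the saliency maps (assumed: Gram-Schmidt on the flattened maps),
    then normalize them into per-pixel fusion weights."""
    u = s_ir.ravel()
    v = s_vis.ravel()
    v_orth = v - (v @ u) / (u @ u + eps) * u                 # remove the component of v along u
    v_orth = np.clip(v_orth, 0.0, None).reshape(s_vis.shape)
    total = s_ir + v_orth + eps
    return s_ir / total, v_orth / total

def fuse(ir, vis):
    """Weighted combination of the source images using the decorrelated saliency weights."""
    w_ir, w_vis = orthogonal_fusion_weights(global_saliency(ir), local_saliency(vis))
    return w_ir * ir.astype(np.float64) + w_vis * vis.astype(np.float64)
```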
