Abstract
Image fusion produces a desired image by integrating the useful information of multiple input images. Most traditional fusion strategies are guided by local image contrast or variance, which cannot adequately represent the visually discernible features of the source images. Moreover, the seam effects or artifacts produced by inconsistency between the fusion weight map and the image content may severely degrade the visual quality of the fused image. An efficient image fusion method with a structural saliency measure and content-adaptive consistency verification is proposed. The fusion is implemented within a nonsubsampled contourlet transform (NSCT)-based image fusion framework. The low-frequency NSCT decomposition coefficients are fused with a weight map constructed from both structural saliency and visual uniqueness features and refined for spatial consistency with a guided filter. The high-frequency NSCT decomposition coefficients are fused according to structural saliency. The performance of the proposed method has been verified on several pairs of multifocus, infrared-visible, and multimodal medical images. Experimental results demonstrate the superiority of the proposed algorithm over several existing state-of-the-art algorithms in both visual and quantitative comparisons.
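The low- and high-frequency fusion steps described above can be illustrated with a minimal sketch. The NSCT decomposition itself is outside the snippet (the subband coefficients are taken as inputs); `structural_saliency` is a simple gradient-magnitude proxy rather than the paper's exact measure, and the guided-filter refinement uses `guidedFilter` from OpenCV's contrib `ximgproc` module. All names other than the OpenCV calls are illustrative assumptions, not the authors' implementation.

```python
# Sketch of saliency-weighted subband fusion with guided-filter refinement.
# Requires opencv-contrib-python for cv2.ximgproc.guidedFilter.
import cv2
import numpy as np

def structural_saliency(img):
    """Gradient-magnitude proxy for structural saliency (assumption, not the paper's measure)."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.GaussianBlur(np.sqrt(gx ** 2 + gy ** 2), (5, 5), 0)

def fuse_low(lowA, lowB, srcA, srcB, radius=8, eps=1e-2):
    """Fuse low-frequency subbands: binary saliency weight map refined by a guided filter."""
    salA, salB = structural_saliency(srcA), structural_saliency(srcB)
    weight = (salA >= salB).astype(np.float32)              # initial weight map
    # Spatial-consistency refinement: edge-preserving smoothing guided by a source image.
    weight = cv2.ximgproc.guidedFilter(srcA, weight, radius, eps)
    return weight * lowA + (1.0 - weight) * lowB

def fuse_high(highA, highB, srcA, srcB):
    """Fuse high-frequency subbands by selecting coefficients with larger structural saliency."""
    salA, salB = structural_saliency(srcA), structural_saliency(srcB)
    return np.where(salA >= salB, highA, highB)
```

In this sketch the source images (`srcA`, `srcB`, float32, single channel) drive the saliency maps, while the NSCT subbands (`lowA`/`lowB`, `highA`/`highB`) are combined per pixel; the fused subbands would then be passed to the inverse NSCT to reconstruct the fused image.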