Abstract
An adaptive joint sparsity model (JSM) is presented for multimodal image fusion. As a multisignal modeling technique derived from distributed compressed sensing, JSM has been successfully employed in multimodal image fusion. In traditional JSM-based fusion, a single dictionary learned by K-singular value decomposition (K-SVD) has high coherence, which may cause visual confusion and misleading artifacts in the fused result. In the proposed model, we first learn a set of subdictionaries using a supervised classification approach based on gradient information. Then, one of the learned subdictionaries is adaptively applied within JSM to obtain the common and innovation sparse coefficients. Finally, the fused image is reconstructed from the fused sparse coefficients and the adaptively selected dictionary. Infrared-visible image pairs and medical images were selected to test the proposed approach. The results were compared with those of traditional methods, including multiscale transform-based methods, the JSM-based method, and the adaptive sparse representation (ASR) model-based method. Experimental results on multimodal images demonstrate that the proposed fusion method achieves better performance than the conventional JSM-based and ASR-based methods in terms of both visual quality and objective assessment.
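To make the JSM-1 decomposition the abstract refers to concrete, the following is a minimal per-patch sketch, not the authors' implementation: the stacked-dictionary formulation, the OMP solver, the activity-based fusion rule, and the names `jsm_fuse_patch`, `D`, and `n_nonzero` are all illustrative assumptions. Each source patch is modeled as a shared common component plus a modality-specific innovation, and both patches are coded jointly over one block dictionary.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def jsm_fuse_patch(x1, x2, D, n_nonzero=8):
    """Fuse two co-registered patches via a JSM-1 style decomposition.

    Each patch is modeled as x_i = D(alpha_c + alpha_i): a common sparse
    component alpha_c shared by both modalities plus a modality-specific
    innovation alpha_i. Joint coding is cast as a single sparse problem
    over a stacked (block) dictionary.
    """
    n, k = D.shape
    # Stacked system: [x1; x2] = [[D, D, 0], [D, 0, D]] @ [a_c; a_1; a_2]
    D_joint = np.block([[D, D, np.zeros((n, k))],
                        [D, np.zeros((n, k)), D]])
    y = np.concatenate([x1, x2])
    coef = orthogonal_mp(D_joint, y, n_nonzero_coefs=n_nonzero)
    a_c, a_1, a_2 = coef[:k], coef[k:2 * k], coef[2 * k:]
    # Illustrative fusion rule: keep the common part and the innovation
    # with the larger activity level (sum of absolute coefficients).
    a_f = a_c + (a_1 if np.abs(a_1).sum() >= np.abs(a_2).sum() else a_2)
    return D @ a_f  # fused patch reconstruction
```

In the adaptive scheme the abstract describes, `D` would be the subdictionary selected for each patch by the gradient-based classifier, and the per-patch reconstructions would be aggregated (e.g., averaged over overlaps) back into the fused image.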