Abstract

Multi-modal image fusion plays an important role in many fields. In this paper, a novel multi-modal image fusion method based on robust principal component analysis (RPCA) is proposed, which consists of a low-rank component fusion part and a sparse component fusion part. In the low-rank component fusion part, a universal low-rank dictionary is constructed for sparse representation (SR), and the low-rank fusion is converted into sparse-coefficient fusion by adopting batch-OMP. In the sparse component fusion part, an anisotropic weight map is constructed to express the salient structures of the images. Moreover, a multi-scale anisotropic guided measure is proposed to guide the fusion process, which can extract and preserve the scale-aware salient details of the sparse components. Finally, the multi-modal fusion is achieved by combining the two fused parts. The experimental results validate that the proposed method outperforms nine state-of-the-art methods in multi-modal fusion at both gray–gray and gray–color scales, in terms of qualitative and quantitative evaluations.
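The RPCA decomposition underlying the method splits an image matrix M into a low-rank component L and a sparse component S with M ≈ L + S. The sketch below illustrates this with a minimal principal component pursuit solver using an inexact augmented Lagrangian scheme in NumPy; it is a generic illustration under stated assumptions (the step-size heuristic and growth factor are common defaults), not the paper's exact solver.

```python
import numpy as np

def svd_shrink(X, tau):
    # Singular value thresholding: soft-shrink the singular values of X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_thresh(X, tau):
    # Element-wise soft thresholding.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, n_iter=200, tol=1e-7, rho=1.5):
    """Split M into a low-rank part L and a sparse part S by principal
    component pursuit, solved with an inexact augmented Lagrangian loop."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))          # standard PCP trade-off weight
    mu = m * n / (4.0 * np.abs(M).sum())    # common heuristic initial penalty
    mu_bar = mu * 1e7                       # cap on the penalty parameter
    L = np.zeros((m, n))
    S = np.zeros((m, n))
    Y = np.zeros((m, n))                    # Lagrange multipliers
    norm_M = np.linalg.norm(M)
    for _ in range(n_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)     # low-rank update
        S = soft_thresh(M - L + Y / mu, lam / mu)    # sparse update
        Y += mu * (M - L - S)                        # dual ascent
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(M - L - S) / norm_M < tol:
            break
    return L, S
```

In the fusion setting, each source image would be decomposed this way, after which the L parts and S parts of the different modalities are fused separately, as the abstract describes.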

Highlights

  • With the development of modern technology, the demand for complete information acquisition is increasing, so multi-sensor systems play an important role in many fields

  • Current image fusion methods are mainly divided into spatial-domain-based and transform-domain-based approaches according to their processing domain [6], [7]

  • The proposed method is compared with nine image fusion methods, i.e., the morphological difference pyramid (MDP) based method [43], the gradient pyramid (GP) based method [44], the sparse representation (SR) based method [22], the dual-tree complex wavelet transform (DTCWT) based method [19], the nonsubsampled contourlet transform (NSCT) based method [20], the NSCT pulse-coupled neural network based method [45], Zhang's method [10], the ASR method [46], and the GFCE method [47]


Introduction

With the development of modern technology, the demand for complete information acquisition is increasing, so multi-sensor systems play an important role in many fields. To better obtain a composite image for further visual and processing tasks, image fusion has become a research hotspot and has been widely employed in computer vision, military surveillance, medical imaging, remote sensing, and so on [1]–[5]. Current image fusion methods are mainly divided into spatial-domain-based and transform-domain-based approaches according to their processing domain [6], [7]. The fused image can be constructed by combining the input images at the pixel level or block level. These methods mainly select salient pixels or regions with higher clarity to fuse the multi-modal images [8]. Direct fusion at the pixel level tends to decrease edge contrast, and the region
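As a concrete illustration of the spatial-domain, pixel-level combination described above, the sketch below fuses two images with a simple choose-max rule that keeps, at each pixel, the source value with the higher local activity. This is a generic textbook scheme, not the paper's method; the Laplacian-based activity measure is an illustrative assumption.

```python
import numpy as np

def fuse_choose_max(img_a, img_b):
    """Pixel-level fusion: at each pixel, keep the value from the source
    image with the larger local activity (absolute Laplacian response).
    A generic spatial-domain illustration, not the paper's method."""
    def activity(img):
        # Replicate-pad the border, then compute a 4-neighbor Laplacian.
        pad = np.pad(img.astype(float), 1, mode="edge")
        lap = (4.0 * pad[1:-1, 1:-1] - pad[:-2, 1:-1] - pad[2:, 1:-1]
               - pad[1:-1, :-2] - pad[1:-1, 2:])
        return np.abs(lap)

    mask = activity(img_a) >= activity(img_b)
    return np.where(mask, img_a, img_b)
```

Such per-pixel selection is exactly where the edge-contrast loss mentioned above comes from: the hard switch between sources can break smooth transitions, which motivates the decomposition-based fusion pursued in this paper.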
