Abstract

Image fusion combines several images of the same scene into a single fused image that contains all the important information. Multiscale transform and sparse representation can solve this problem effectively. However, because the number of dictionary atoms is limited, sparse representation–based fusion methods struggle to describe image details accurately, and they are computationally expensive. Conversely, in multiscale transform–based methods, the low-pass subband coefficients are hard to represent sparsely, so significant features cannot be extracted from them. In this paper, a nonsubsampled contourlet transform (NSCT) and sparse representation–based image fusion method (NSCTSR) is proposed. NSCT performs a multiscale decomposition of the source images to express their details, and a dictionary learning scheme in the NSCT domain is presented, with which the low-frequency information of an image can be represented sparsely in order to extract its salient features. Furthermore, nonoverlapping blocking reduces the computational cost of the sparse representation–based fusion step. Experimental results show that the proposed method outperforms fusion methods based on either sparse representation or multiscale decomposition alone.
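The claim that nonoverlapping blocking reduces the cost of sparse coding can be illustrated with a quick count: each extracted patch requires solving one sparse-coding problem, so the total cost scales with the number of patches. A minimal sketch (pure Python; the 256×256 subband size and 8×8 patch size are hypothetical, not taken from the paper):

```python
def num_blocks(image_size: int, block_size: int, step: int) -> int:
    """Number of square patches extracted from an image_size x image_size image."""
    per_axis = (image_size - block_size) // step + 1
    return per_axis * per_axis

# Hypothetical sizes: a 256x256 low-pass subband tiled into 8x8 patches.
overlapping = num_blocks(256, 8, 1)      # sliding window, step 1 -> 62001 patches
nonoverlapping = num_blocks(256, 8, 8)   # nonoverlapping tiling  -> 1024 patches

# Roughly 61x fewer sparse-coding problems to solve.
print(overlapping, nonoverlapping)
```

Since every patch triggers an orthogonal-matching-pursuit (or similar) solve, the ratio of patch counts is a first-order estimate of the speedup from nonoverlapping blocking.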

Highlights

  • Image fusion is a process of combining several source images that are captured by multiple sensors or by a single sensor at different times

  • From the left side, it can be seen that the SR, simultaneous orthogonal matching pursuit (SOMP), and joint sparse representation (JSR)-based methods yield much clearer skeletal features than the stationary wavelet transform (SWT), nonsubsampled contourlet transform (NSCT), and LPSSIM fused images, because sparse representation can extract the salient features of the source images

  • Second best is the MODJSR (method of optimal directions for joint sparse representation-based image fusion) fused image, which loses only some soft-tissue details, as can be seen in the left image in Fig. 8(h); those details are important for diagnosis

Introduction

Image fusion is a process of combining several source images that are captured by multiple sensors or by a single sensor at different times. The fused result contains more comprehensive and accurate information than any single source image. Most fusion methods fall into two categories: multiscale transform–based and sparse representation–based approaches. The basic idea of the multiscale transform–based fusion method is that the salient information of an image is closely related to its multiscale decomposition coefficients.
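The multiscale fusion idea above can be sketched in a few lines. This is not the paper's NSCT pipeline; as an assumption, a one-level 2-D Haar wavelet stands in for the multiscale transform, the low-pass bands are averaged, and each detail coefficient is taken from whichever source has the larger magnitude (a common "max-abs" rule):

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    s = np.sqrt(2.0)
    lo = (img[:, 0::2] + img[:, 1::2]) / s   # row low-pass
    hi = (img[:, 0::2] - img[:, 1::2]) / s   # row high-pass
    LL = (lo[0::2, :] + lo[1::2, :]) / s
    LH = (lo[0::2, :] - lo[1::2, :]) / s
    HL = (hi[0::2, :] + hi[1::2, :]) / s
    HH = (hi[0::2, :] - hi[1::2, :]) / s
    return LL, LH, HL, HH

def haar_reconstruct(LL, LH, HL, HH):
    """Inverse of haar_decompose (the transform is orthonormal)."""
    s = np.sqrt(2.0)
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :], lo[1::2, :] = (LL + LH) / s, (LL - LH) / s
    hi[0::2, :], hi[1::2, :] = (HL + HH) / s, (HL - HH) / s
    img = np.empty((lo.shape[0], lo.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = (lo + hi) / s, (lo - hi) / s
    return img

def fuse(img_a, img_b):
    """Average the low-pass band; keep the larger-magnitude detail coefficient."""
    a, b = haar_decompose(img_a), haar_decompose(img_b)
    LL = (a[0] + b[0]) / 2.0
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(a[1:], b[1:])]
    return haar_reconstruct(LL, *details)
```

The paper's contribution replaces both rules: NSCT supplies the multiscale decomposition, and a learned dictionary sparsely codes the low-frequency band instead of simple averaging.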
