Abstract

Since sparse representation (SR) has been widely used in image processing, especially for image fusion, this paper presents a remote sensing image fusion method based on SR. First, an adaptive dictionary is learned from the source images, and sparse coefficients are obtained by sparsely coding the source images over this dictionary. Then, these sparse coefficients are fused with the help of an improved hyperbolic tangent (tanh) function and the l0-max rule, and an initial fused image is obtained by SR-based fusion. To take full advantage of the spatial information of the source images, a fused image based on the spatial domain (SF) is obtained at the same time. Finally, the fused image is reconstructed by guided filtering of the SR-based and SF-based fusion results. Experimental results show that the proposed method outperforms some state-of-the-art methods in both visual and quantitative evaluations.
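The pipeline above can be summarized in a short sketch. The Python code below is only a minimal illustration of the steps named in the abstract, not the authors' implementation: it assumes 8x8 overlapping patches, scikit-learn for dictionary learning and OMP sparse coding, a simple l1-activity max rule standing in for the improved tanh / l0-max rule, a pixel-wise maximum as a stand-in for the spatial-domain (SF) fusion, and the guided filter from opencv-contrib (cv2.ximgproc.guidedFilter).

```python
import numpy as np
import cv2  # requires opencv-contrib-python for cv2.ximgproc
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def sr_fuse(img_a, img_b, patch=8, n_atoms=128, k=5):
    """SR-based fusion of two registered grayscale float images (sketch)."""
    # 1. Extract overlapping patches from both source images.
    pa = extract_patches_2d(img_a, (patch, patch)).reshape(-1, patch * patch).astype(np.float64)
    pb = extract_patches_2d(img_b, (patch, patch)).reshape(-1, patch * patch).astype(np.float64)

    # 2. Learn one adaptive dictionary from a subsample of the source patches.
    train = np.vstack([pa, pb])[::10]
    D = MiniBatchDictionaryLearning(n_components=n_atoms).fit(train).components_

    # 3. Sparse-code each image over the shared dictionary (OMP, k nonzeros).
    ca = sparse_encode(pa, D, algorithm='omp', n_nonzero_coefs=k)
    cb = sparse_encode(pb, D, algorithm='omp', n_nonzero_coefs=k)

    # 4. Fuse coefficients patch-wise by activity level (l1 norm here, a
    #    stand-in for the paper's improved tanh / l0-max rule).
    keep_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)
    cf = np.where(keep_a[:, None], ca, cb)

    # 5. Reconstruct the SR-based fused image from the fused coefficients.
    fused_patches = (cf @ D).reshape(-1, patch, patch)
    return reconstruct_from_patches_2d(fused_patches, img_a.shape)

def fuse(img_a, img_b):
    f_sr = sr_fuse(img_a, img_b)        # fusion in the sparse domain
    f_sf = np.maximum(img_a, img_b)     # crude spatial-domain (SF) stand-in
    # 6. Guided filtering combines the two results (SR result as the guide).
    return cv2.ximgproc.guidedFilter(f_sr.astype(np.float32),
                                     f_sf.astype(np.float32), 8, 1e-2)
```

The choice of patch size, number of atoms, sparsity level, and the SF stand-in are all assumptions made for the sketch; the paper's own fusion rules and spatial-domain method should be substituted where they differ.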

Highlights

  • Image fusion can be defined as the process of integrating the complementary information of remote sensing images and other source images of the same scene into a single fused image that is better suited to the human visual system [1].

  • Most researchers perform image fusion at the pixel level [6,7], using either spatial-domain or transform-domain fusion methods.

  • Due to the good performance of sparse representation and the rich information available in the spatial domain, this paper presents a new remote sensing image fusion method based on sparse representation and guided filtering.


Summary

Introduction

Image fusion can be defined as the process of integrating the complementary information of remote sensing images and other source images of the same scene into a single fused image that is better suited to the human visual system [1]. Compared with multi-scale transforms [12,13,14], the rapidly developing sparse representation methods represent the source images more sparsely, extract the latent information hidden in them more effectively, and produce more accurate fused images. Based on these findings, scholars have applied sparse representation to image fusion. Image fusion methods based on SR obtain the fused image by sparsely coding the source images and fusing the sparse coefficients; however, this ignores the correlation of the image information in the spatial domain and loses some important detail from the source images.
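As a concrete, hedged illustration of fusing sparse coefficients, the sketch below combines a tanh-squashed activity measure with an l0-max fallback. The paper's exact improved-tanh formulation is not reproduced here, so the weighting, the scale parameter k, and the tie-break are assumptions.

```python
import numpy as np

def fuse_coefficients(ca, cb, k=2.0):
    """ca, cb: (n_patches, n_atoms) sparse coefficient matrices of the two sources."""
    # Activity level per patch: l1 norm squashed by tanh so that very large
    # coefficient magnitudes do not dominate the comparison (assumed behaviour).
    act_a = np.tanh(k * np.abs(ca).sum(axis=1))
    act_b = np.tanh(k * np.abs(cb).sum(axis=1))
    choose_a = act_a > act_b

    # Where the tanh activities effectively tie (e.g. both saturate near 1),
    # fall back to an l0-max rule: pick the vector with more non-zero entries.
    tie = np.isclose(act_a, act_b)
    choose_a[tie] = (np.count_nonzero(ca[tie], axis=1) >=
                     np.count_nonzero(cb[tie], axis=1))

    # Select the winning coefficient vector for every patch.
    return np.where(choose_a[:, None], ca, cb)
```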

Sparse Representation
Adaptive Dictionary Learning
Fusion Rules: Improved tanh and l0-max
The Proposed Image Fusion Method
The Experiments and Result Analysis
Objective Evaluation Indexes
Large-Scale Image Fusion of Optical and Radar Images
Image Fusion of Remote Sensing Images
Findings
Conclusions