Abstract

The rapid development of remote sensing and space technology provides multisource remote sensing image data for Earth observation of the same area. The information provided by these images is often complementary, yet fusing multisource images into a single composite remains challenging. This paper proposes a novel multisource remote sensing image fusion algorithm that integrates the contrast saliency map (CSM) and the sum-modified-Laplacian (SML) in the nonsubsampled shearlet transform (NSST) domain. The NSST decomposes the source images into low-frequency sub-bands and high-frequency sub-bands. Low-frequency sub-bands reflect the contrast and brightness of the source images, while high-frequency sub-bands reflect their texture and details. Accordingly, a contrast-saliency-map rule and an SML rule are applied to the corresponding sub-bands. Finally, the inverse NSST reconstructs the fused image. Experimental results demonstrate that the proposed multisource remote sensing image fusion technique performs well in terms of contrast enhancement and detail preservation.
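
For illustration, the sketch below outlines this pipeline under simplified assumptions: the NSST decomposition and reconstruction are replaced by a plain Gaussian low-pass/residual split (a stand-in, not the transform used in the paper), the contrast saliency map is approximated by local deviation from the mean, and the SML follows its usual windowed definition. Function names such as `fuse_images` and `decompose` are illustrative, not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def sum_modified_laplacian(band, step=1, win=3):
    # Modified Laplacian, then a windowed average (proportional to the
    # windowed sum used in most SML definitions; variants square or
    # threshold the response before summing).
    ml = (np.abs(2 * band - np.roll(band, step, axis=0) - np.roll(band, -step, axis=0))
          + np.abs(2 * band - np.roll(band, step, axis=1) - np.roll(band, -step, axis=1)))
    return uniform_filter(ml, size=win)

def contrast_saliency(low, win=7):
    # Crude contrast-saliency proxy: absolute deviation from the local mean.
    return np.abs(low - uniform_filter(low, size=win))

def decompose(img, sigma=2.0):
    # Stand-in for the NSST: one Gaussian low-pass band plus one residual high band.
    low = gaussian_filter(img, sigma)
    return low, img - low

def fuse_images(img_a, img_b):
    low_a, high_a = decompose(img_a)
    low_b, high_b = decompose(img_b)

    # Low-frequency rule: weight each source by its contrast saliency map.
    sal_a, sal_b = contrast_saliency(low_a), contrast_saliency(low_b)
    w = sal_a / (sal_a + sal_b + 1e-12)
    low_f = w * low_a + (1.0 - w) * low_b

    # High-frequency rule: keep the coefficient with the larger SML response.
    mask = sum_modified_laplacian(high_a) >= sum_modified_laplacian(high_b)
    high_f = np.where(mask, high_a, high_b)

    # Reconstruction (inverse of the stand-in decomposition).
    return low_f + high_f
```

In the method itself, the decomposition would return one low-frequency sub-band and several directional high-frequency sub-bands per scale from the NSST, with the same two rules applied band by band before the inverse transform.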

Highlights

  • Remote sensing images play an important role in urban planning, environmental monitoring, and military defense [1]

  • Many image fusion methods have been proposed in recent decades; among them, algorithms based on the transform domain and on edge-preserving filters are widely used [4]

  • The combination of guided image filtering with transform-domain algorithms such as the dual-tree complex wavelet transform (DTCWT), the nonsubsampled contourlet transform (NSCT), and the nonsubsampled shearlet transform (NSST) has been introduced into image fusion and achieves good results

Introduction

Remote sensing images play an important role in urban planning, environmental monitoring, and military defense [1]. Many image fusion methods have been proposed in recent decades; among them, algorithms based on the transform domain and on edge-preserving filters are widely used [4]. Wang et al. [14] proposed a multispectral (MS) and panchromatic (PAN) image fusion technique based on a hidden Markov tree model in the complex tight framelet transform domain to improve the spatial resolution of the MS image while preserving its spectral information. Yang et al. [15] proposed a remote sensing image fusion algorithm based on a contourlet hidden Markov tree and a clarity–saliency-driven pulse-coupled neural network (PCNN) model to enhance the edges and contours of fused remote sensing images. The combination of guided image filtering with transform-domain algorithms such as the DTCWT, NSCT, and NSST has also been introduced into image fusion and achieves good results.
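
For context, a minimal single-channel guided image filter in the standard box-filter formulation of He et al. is sketched below. This is the generic reference algorithm, not code from the cited fusion works; the parameter names and defaults (`radius`, `eps`) are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    # Edge-preserving smoothing of `src` steered by the guidance image `guide`
    # (single-channel float arrays of equal shape).
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)

    cov_Ip = corr_Ip - mean_I * mean_p    # local covariance of guide and source
    var_I = corr_II - mean_I ** 2         # local variance of the guide

    a = cov_Ip / (var_I + eps)            # per-window linear coefficients
    b = mean_p - a * mean_I

    mean_a = uniform_filter(a, size)      # average the coefficients over windows
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b
```

In hybrid fusion schemes of this kind, such a filter is typically used to refine decision or weight maps so that they follow the edges of the source images.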
