Abstract

Fusion of remote sensing images with different spatial and temporal resolutions is needed by a wide range of Earth observation applications. A small number of spatiotemporal fusion methods based on sparse representation appear more promising than traditional linear mixture methods at reflecting abruptly changing terrestrial content. However, one main difficulty is that sparse representation yields limited representational accuracy, due in part to insufficient prior knowledge. For remote sensing images, the cluster-structured and joint-structured sparsity of the sparse coefficients can be employed as prior knowledge. In this paper, a new optimization model combining semi-coupled dictionary learning and structural sparsity is constructed to predict an unknown high-resolution image from known images. Specifically, intra-block correlation and cluster-structured sparsity are exploited for single-channel reconstruction, inter-band similarity with joint-structured sparsity is exploited for multichannel reconstruction, and both are implemented with block sparse Bayesian learning. The detailed iterative optimization steps are given. In the experiments, the red, green, and near-infrared bands of Landsat-7 and Moderate Resolution Imaging Spectroradiometer (MODIS) images are fused, and the root mean square error (RMSE) is used to check prediction accuracy. The experiments show that the proposed method produces higher-quality predictions than state-of-the-art methods.
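At the heart of sparse-representation fusion is coding an image patch as a sparse combination of dictionary atoms. The following is a minimal illustrative sketch of that idea using plain orthogonal matching pursuit over a random dictionary; it is not the paper's semi-coupled dictionary learning or block sparse Bayesian learning, and the dictionary size and sparsity level are arbitrary choices for the demo.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms,
    then least-squares fit the coefficients on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha

# toy demo: recover a 3-sparse code of a synthetic "patch"
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))        # 256 atoms of dimension 64
D /= np.linalg.norm(D, axis=0)            # unit-norm columns
alpha_true = np.zeros(256)
alpha_true[[3, 40, 200]] = [1.5, -2.0, 0.7]
y = D @ alpha_true                        # noiseless observation
alpha = omp(D, y, 3)
print(sorted(np.nonzero(alpha)[0]))       # recovers the support [3, 40, 200]
```

In the noiseless, well-conditioned setting above, the greedy search recovers the exact support; structured-sparsity priors such as the cluster and joint structures described in the abstract constrain which supports are admissible, which is what improves accuracy when prior knowledge is scarce.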

Highlights

  • Multi-sensor fusion has long attracted attention for enhancing complementary information [1], especially in the remote sensing big data era [2,3,4,5]

  • The image at one moment can serve as a reference from which to predict the high-spatial-resolution image at another moment by matching similar neighboring pixels, as in the spatial and temporal adaptive reflectance fusion model (STARFM) [6], enhanced STARFM (ESTARFM) [7], and the spatial temporal adaptive algorithm for mapping reflectance change (STAARCH) [8]

  • We propose a fusion method that integrates remote sensing images of different spatial and temporal resolutions to construct new images with both high spatial and high temporal resolution
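The neighbor-matching idea behind STARFM-style methods can be sketched very roughly: add the coarse-resolution temporal change to the known fine-resolution image, averaging the change over spectrally similar neighbors. This is only an illustrative simplification, not STARFM itself, which additionally uses distance, spectral, and temporal weights with candidate-pixel selection; the window size and similarity weighting below are arbitrary demo choices.

```python
import numpy as np

def starfm_like(fine1, coarse1, coarse2, win=1):
    """Highly simplified STARFM-style prediction for one band:
    fine2[i, j] = fine1[i, j] + weighted average of the coarse-image
    change (coarse2 - coarse1) over a small neighborhood, with weights
    favoring neighbors spectrally similar to the center pixel."""
    rows, cols = fine1.shape
    change = coarse2 - coarse1
    pred = np.empty_like(fine1, dtype=float)
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(i - win, 0), min(i + win + 1, rows)
            j0, j1 = max(j - win, 0), min(j + win + 1, cols)
            # similarity weight: closer reflectance -> larger weight
            w = 1.0 / (np.abs(fine1[i0:i1, j0:j1] - fine1[i, j]) + 1e-6)
            pred[i, j] = fine1[i, j] + np.average(change[i0:i1, j0:j1], weights=w)
    return pred

# sanity check: a spatially uniform change of 0.2 in the coarse images
# shifts every fine-resolution pixel by exactly 0.2
rng = np.random.default_rng(0)
fine1 = rng.random((6, 6))
coarse1 = rng.random((6, 6))
pred = starfm_like(fine1, coarse1, coarse1 + 0.2)
print(np.allclose(pred, fine1 + 0.2))  # True
```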


Introduction

Multi-sensor fusion has long attracted attention for enhancing complementary information [1], especially in the remote sensing big data era [2,3,4,5]. Different types of remote sensing satellites are used to monitor the Earth's surface, resulting in a wide range of sensors and observation schemes for acquiring and analyzing the state of the ground. The physical trade-off between temporal and spatial resolution makes it costly to provide data with both high temporal and high spatial resolution concurrently. This limits the response speed and processing accuracy of quantitative remote sensing.

