Abstract

Remote sensing image fusion is considered a cost-effective method for handling the trade-off between the spatial, temporal, and spectral resolutions of current satellite systems. However, most current fusion methods concentrate on fusing images in only two of the spatial, temporal, and spectral domains, and few efforts have been made to comprehensively explore the relationships among spatio-temporal-spectral features. In this study, we propose a novel integrated spatio-temporal-spectral fusion framework based on semicoupled sparse tensor factorization to generate synthesized frequent high-spectral- and high-spatial-resolution images by blending multisource observations. Specifically, the proposed method regards the desired high spatio-temporal-spectral resolution images as a four-dimensional tensor and formulates the integrated fusion problem as the estimation of the core tensor and the dictionary along each mode. The high correlation across the spectral domain and the high self-similarity (redundancy) in the spatial and temporal domains are jointly exploited using low-dimensional, sparse core tensors. In addition, assuming that the sparse coefficients in the core tensors across the observed and desired image spaces are not strictly the same, we formulate the estimation of the core tensor and the dictionaries as a semicoupled sparse tensor factorization of the available heterogeneous spatial, spectral, and temporal remote sensing observations. Finally, the proposed method can exploit the complementary spatial, temporal, and spectral information of any combination of remote sensing data within this single unified model. Experiments on multiple fusion tasks, including spatio-spectral, spatio-temporal, and spatio-temporal-spectral data fusion, demonstrate the effectiveness and efficiency of the proposed method.
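To make the tensor formulation concrete, the sketch below shows a Tucker-style reconstruction of a 4-D image tensor (height x width x bands x dates) from a small sparse core tensor and one dictionary (factor matrix) per mode. All sizes, the random data, and the hard-thresholding used to enforce sparsity are illustrative assumptions, not the paper's actual algorithm, which jointly estimates the core and dictionaries from the observations.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode (mode-n product)."""
    # Move the target mode to the front, flatten the rest, multiply, restore.
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    result = matrix @ t.reshape(shape[0], -1)
    return np.moveaxis(result.reshape(matrix.shape[0], *shape[1:]), 0, mode)

# Hypothetical sizes: an 8x8 spatial patch, 4 spectral bands, 3 acquisition dates.
rng = np.random.default_rng(0)
core = rng.standard_normal((4, 4, 2, 2))   # low-dimensional core tensor
core[np.abs(core) < 1.0] = 0.0             # hard-threshold to make the core sparse
dicts = [rng.standard_normal((full, low))  # one dictionary per mode
         for full, low in zip((8, 8, 4, 3), core.shape)]

# Tucker-style reconstruction: X = core x_1 D1 x_2 D2 x_3 D3 x_4 D4
X = core
for mode, D in enumerate(dicts):
    X = mode_n_product(X, D, mode)

print(X.shape)  # (8, 8, 4, 3): height x width x bands x dates
```

In the semicoupled setting described above, each observed low-resolution image would share the mode dictionaries but carry its own (related, not identical) sparse core coefficients.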
