Abstract

To generate a high-spatial-resolution hyperspectral (HHS) image from a high-spatial-resolution multispectral (HMS) image, both spatial and spectral information should be considered simultaneously in order to build an accurate mapping from HMS to HHS. To this end, this letter proposes a spectral-spatial joint spectral super-resolution method that uses an end-to-end learning strategy for each subspace with a cluster-based multibranch backpropagation neural network (BPNN). More specifically, in addition to spectral similarity, a modified superpixel segmentation is introduced to incorporate spatial contextual information, and a new framework built on it is presented. Comparisons on the Columbia University Automated Vision Environment (CAVE) data set show that the proposed method outperforms related state-of-the-art methods by more than 0.3 in root-mean-squared error (RMSE) and by more than 1.0 in the spectral angle mapper (SAM) index. In addition, an exemplary application is demonstrated using synchronized observations collected simultaneously by the multispectral and hyperspectral sensors mounted on the HJ-1A satellite.
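
To illustrate the overall idea described above (spatial grouping via superpixels, spectral clustering into subspaces, and one BPNN branch per cluster mapping MS spectra to HS spectra), the following is a minimal sketch under stated assumptions, not the authors' implementation: it assumes paired MS/HS training cubes are available, uses scikit-image's SLIC as a stand-in for the modified superpixel segmentation, k-means for the spectral clustering, and scikit-learn's MLPRegressor as a stand-in for each BPNN branch; all names and parameters are illustrative.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

def train_branches(ms_img, hs_img, n_segments=200, n_clusters=8):
    """ms_img: (H, W, B_ms) multispectral cube; hs_img: (H, W, B_hs) hyperspectral cube."""
    h, w, b_ms = ms_img.shape

    # Spatial context: group neighbouring, spectrally similar pixels into superpixels.
    segments = slic(ms_img, n_segments=n_segments, channel_axis=-1)

    # Represent each superpixel by its mean MS spectrum (joint spectral-spatial feature).
    seg_ids = np.unique(segments)
    seg_means = np.array([ms_img[segments == s].mean(axis=0) for s in seg_ids])

    # Spectral similarity: cluster superpixel signatures into subspaces.
    seg_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(seg_means)
    pixel_cluster = seg_labels[np.searchsorted(seg_ids, segments)]  # per-pixel cluster id

    # One small backpropagation network (MLP) per cluster: MS spectrum -> HS spectrum.
    ms_flat = ms_img.reshape(-1, b_ms)
    hs_flat = hs_img.reshape(-1, hs_img.shape[-1])
    branches = {}
    for c in range(n_clusters):
        mask = pixel_cluster.reshape(-1) == c
        branches[c] = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(
            ms_flat[mask], hs_flat[mask])
    return pixel_cluster, branches
```

At test time, each MS pixel would be assigned to a cluster in the same way and passed through that cluster's branch to predict its HS spectrum, so that both spatial context and spectral similarity shape which mapping is applied.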
