Abstract

The nonoverlapped spectral range between low spatial resolution (LR) hyperspectral (HS) and high spatial resolution (HR) multispectral (MS) images has been a fundamental but challenging problem for MS/HS fusion. The spectral range of HS data is generally 400–2500 nm, whereas that of MS data is generally 400–900 nm, which raises the question of how to obtain a high-fidelity HR HS fused image over the whole 400–2500 nm spectrum. In this article, we propose a band divide-and-conquer framework (BDCF) that solves this problem by comprehensively considering spectral fidelity, spatial enhancement, and computational efficiency. First, the spectral bands of the HS image are divided into overlapped and nonoverlapped bands according to the spectral responses of the HS and MS sensors. Then, an improved component substitution (CS)-based method combined with a neural network is proposed to fuse the overlapped bands of the LR HS image. Next, a mapping-based method using a neural network is presented to model the complicated nonlinear relationship between the overlapped and nonoverlapped bands of the original LR HS data. The trained network is then applied to the fused overlapped HR HS bands to estimate the nonoverlapped HR HS bands. Experimental results on two simulated data sets and two real data sets of Gaofen (GF)-5 LR HS, GF-1 MS, and Sentinel-2A MS images show that the proposed BDCF achieves both high spectral fidelity and sharp spatial details, and delivers fusion performance competitive with state-of-the-art methods. Moreover, BDCF is computationally more efficient than optimization-based and deep learning-based fusion methods.
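
To make the three-step pipeline concrete, the following Python sketch outlines it under simplifying assumptions; it is not the authors' BDCF implementation. The band split uses only the MS spectral coverage, the overlapped-band fusion is a generic CS-style intensity substitution with detail injection, and a small MLP (a stand-in for the paper's network) learns the overlapped-to-nonoverlapped mapping on the LR data. All function names, array shapes, and hyperparameters (divide_bands, cs_fuse_overlapped, the (64, 64) hidden layers, etc.) are illustrative assumptions.

```python
# Hypothetical band divide-and-conquer sketch (not the published BDCF code).
import numpy as np
from scipy.ndimage import zoom
from sklearn.neural_network import MLPRegressor

def divide_bands(hs_wavelengths, ms_range=(400.0, 900.0)):
    """Split HS band indices into overlapped / nonoverlapped sets
    according to the MS spectral coverage (assumed 400-900 nm)."""
    overlapped = [i for i, w in enumerate(hs_wavelengths)
                  if ms_range[0] <= w <= ms_range[1]]
    nonoverlapped = [i for i, w in enumerate(hs_wavelengths)
                     if w < ms_range[0] or w > ms_range[1]]
    return overlapped, nonoverlapped

def cs_fuse_overlapped(lr_hs_ov, hr_ms, scale):
    """Generic component-substitution fusion for the overlapped bands:
    upsample LR HS, form an intensity component, and inject the HR MS
    spatial detail into every overlapped band with a per-band gain."""
    up = np.stack([zoom(b, scale, order=1) for b in lr_hs_ov])   # (B_ov, H, W)
    intensity_hr = hr_ms.mean(axis=0)                            # simple intensity proxy
    intensity_lr = zoom(lr_hs_ov.mean(axis=0), scale, order=1)
    detail = intensity_hr - intensity_lr
    gains = np.array([np.cov(b.ravel(), intensity_lr.ravel())[0, 1]
                      / (intensity_lr.var() + 1e-12) for b in up])
    return up + gains[:, None, None] * detail[None]

def fit_band_mapping(lr_hs_ov, lr_hs_nov):
    """Learn a per-pixel nonlinear mapping from overlapped to nonoverlapped
    reflectances on the original LR HS data (tiny MLP as a stand-in)."""
    X = lr_hs_ov.reshape(lr_hs_ov.shape[0], -1).T    # pixels x overlapped bands
    Y = lr_hs_nov.reshape(lr_hs_nov.shape[0], -1).T  # pixels x nonoverlapped bands
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    net.fit(X, Y)
    return net

def predict_nonoverlapped(net, hr_hs_ov):
    """Apply the trained mapping to the fused overlapped HR bands to
    estimate the nonoverlapped HR bands."""
    h, w = hr_hs_ov.shape[1:]
    Y = net.predict(hr_hs_ov.reshape(hr_hs_ov.shape[0], -1).T)
    return Y.T.reshape(-1, h, w)
```

In this simplified form, the final HR HS cube is the concatenation of the CS-fused overlapped bands and the network-predicted nonoverlapped bands, restacked in the original wavelength order.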
