Abstract

The fusion of hyperspectral (HS) and panchromatic (PAN) images aims to generate a fused HS image that combines the spectral information of the HS image with the spatial information of the PAN image. In this article, we propose a multiresolution spatial–spectral feature learning (MSSL) framework for fusing HS and PAN images. The proposed MSSL decomposes an otherwise deep and complex network into several simple, shallow subnetworks to simplify feature learning. MSSL upsamples the HS image and downsamples the PAN image, and designs multiresolution 3-D convolutional autoencoders (CAEs) with a spectral constraint to learn complete spatial–spectral features of the HS image. It also designs multiresolution 2-D CAEs with a spatial constraint to extract spatial features of the PAN image at low computational cost. To generate a pansharpened HS image with high spatial and spectral fidelity, a multiresolution residual network reconstructs the HS image from the extracted spatial–spectral features. Extensive experiments on three widely used remote sensing data sets, in comparison with state-of-the-art HS image fusion methods, demonstrate the superiority of the proposed MSSL method. Code is available at https://github.com/Jiahuiqu/MSSL.
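The abstract describes shallow 3-D CAE subnetworks that learn spatial–spectral features of the HS cube under a reconstruction (spectral) constraint. A minimal PyTorch sketch of one such branch is shown below; the `CAE3D` class, its layer widths, and the input sizes are illustrative assumptions, not the authors' actual configuration (see the linked repository for the real implementation).

```python
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    """Hypothetical shallow 3-D convolutional autoencoder for an HS cube.

    Layer widths (feat=16) and depths are illustrative assumptions only.
    """

    def __init__(self, feat=16):
        super().__init__()
        # Encoder: 3-D convolutions over (bands, height, width)
        self.encoder = nn.Sequential(
            nn.Conv3d(1, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder mirrors the encoder to reconstruct the input cube
        self.decoder = nn.Sequential(
            nn.Conv3d(feat, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)            # spatial-spectral feature volume
        return self.decoder(z), z

model = CAE3D()
hs = torch.rand(2, 1, 31, 32, 32)      # (batch, channel, bands, H, W)
recon, feats = model(hs)
# A spectral constraint could be enforced via a reconstruction loss:
loss = nn.functional.mse_loss(recon, hs)
```

A multiresolution version would train several such branches on the HS image at different spatial scales; the 2-D CAE branches for the PAN image follow the same pattern with `nn.Conv2d` in place of `nn.Conv3d`.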
