Abstract

Fusion toward a high-spatial-resolution hyperspectral (HHS) image from a low-spatial-resolution hyperspectral (LHS) and a high-spatial-resolution multispectral (HMS) image is usually formulated as spatial super-resolution of the LHS image with the help of the HMS image, which may result in the loss of detailed structural information. To address this problem, the fusion of the HMS and LHS images is instead formulated as a nonlinear spectral mapping from the HMS to the HHS image with the help of the LHS image, and a novel cluster-based fusion method using multi-branch BP neural networks (named CF-BPNNs) is proposed, to ensure a more reasonable spectral mapping for each cluster. In the training stage, exploiting the intrinsic characteristic that spectra are more similar within each cluster than between clusters, and so are the corresponding spectral mappings, an unsupervised clustering is used to divide the spectra of the down-sampled HMS image (denoted LMS) into several clusters according to spectral correlation. Then, the spectrum-pairs from the clustered LMS image and the corresponding LHS image are used to train multi-branch BP neural networks (BPNNs), establishing the nonlinear spectral mapping for each cluster. In the fusion stage, a supervised clustering is used to group the spectra of the HMS image into the clusters determined during the training stage, and the final HHS image is reconstructed from the clustered HMS image using the trained multi-branch BPNNs accordingly. Comparison with the related state-of-the-art methods demonstrates that our proposed method achieves better fusion quality in both the spatial and spectral domains.
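The two-stage pipeline described above can be sketched in a few dozen lines. The following is a minimal illustration only, assuming k-means as the unsupervised clustering step and a single-hidden-layer network trained by plain back-propagation as each branch BPNN; the paper's actual architectures, cluster counts, and training details are not specified here, and all function names are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Unsupervised clustering of spectra (stand-in for the paper's clustering)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    return centers, labels

class BPNN:
    """One hidden tanh layer with a linear output, trained by back-propagation."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden)); self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out)); self.b2 = np.zeros(n_out)

    def predict(self, X):
        return np.tanh(X @ self.W1 + self.b1) @ self.W2 + self.b2

    def fit(self, X, Y, lr=0.05, epochs=500):
        for _ in range(epochs):
            H = np.tanh(X @ self.W1 + self.b1)
            err = H @ self.W2 + self.b2 - Y          # output-layer error
            gW2 = H.T @ err / len(X); gb2 = err.mean(0)
            dH = (err @ self.W2.T) * (1 - H ** 2)    # back-propagate through tanh
            gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
            self.W2 -= lr * gW2; self.b2 -= lr * gb2
            self.W1 -= lr * gW1; self.b1 -= lr * gb1
        return self

def train_cf_bpnns(lms, lhs, k=3, n_hidden=16):
    """Training stage: cluster LMS spectra, fit one MS->HS branch per cluster.
    lms: (N, B_ms) spectra of the down-sampled HMS image; lhs: (N, B_hs)
    co-registered LHS spectra in the same pixel order."""
    centers, labels = kmeans(lms, k)
    branches = [BPNN(lms.shape[1], n_hidden, lhs.shape[1], seed=c)
                .fit(lms[labels == c], lhs[labels == c]) for c in range(k)]
    return centers, branches

def fuse(hms, centers, branches):
    """Fusion stage: assign each HMS spectrum to its training-stage cluster
    (supervised clustering) and apply that cluster's trained mapping."""
    labels = np.argmin(((hms[:, None] - centers) ** 2).sum(-1), axis=1)
    hhs = np.empty((hms.shape[0], branches[0].b2.size))
    for c, net in enumerate(branches):
        if (labels == c).any():
            hhs[labels == c] = net.predict(hms[labels == c])
    return hhs
```

In this sketch the "supervised clustering" of the fusion stage is simply nearest-centroid assignment to the clusters found in training, so each HMS spectrum is guaranteed to be mapped by the branch trained on its most spectrally similar cluster.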

Highlights

  • Hyperspectral (HS) images can provide abundant spectral and spatial information simultaneously, and have been widely used in various fields

  • The overall framework of our proposed method is shown in Figure 4. As can be seen in Figure 4, the spatial information is provided directly by the HMS image, and the spectral information is provided by the low-spatial-resolution hyperspectral (LHS) and low-spatial-resolution multispectral (LMS) image through spectral mapping, which is represented by multi-branch BP neural networks (BPNNs)

  • To evaluate the fusion performance of our proposed CF-BPNNs method, the related state-of-the-art methods for LHS and high-spatial-resolution multispectral (HMS) image fusion are used as comparisons: the coupled nonnegative matrix factorization (CNMF) method [13], the generalization of simultaneous orthogonal matching pursuit (G-SOMP+) method [1], the hyperspectral super-resolution (Hysure) method [14], the fast fusion based on the Sylvester equation (FUSE) method [11], the collaborative representation using local adaptive dictionary pair (LACRF) method [8], the non-factorization sparse representation and error matrix estimation (NFSREE) method [15] and the most recent 3-D convolutional neural network (3D-CNN) method [20]

Introduction

Hyperspectral (HS) images can provide abundant spectral and spatial information simultaneously, and have been widely used in various fields. However, increasing the number of spectral bands in the HS imaging process limits the achievable spatial resolution [1]. Multispectral (MS) images, which have only a few spectral bands, generally have much higher spatial resolution than HS images. A more useful high-spatial-resolution hyperspectral (HHS) image can be obtained if we can fuse a low-spatial-resolution hyperspectral (LHS) image with a high-spatial-resolution multispectral (HMS) image of the same scene. A number of methods have been proposed for HS and MS image fusion, which can be mainly divided into four groups: pansharpening-based, Bayesian-based, dictionary-learning-based and neural-network-based methods. Pansharpening-based methods have been proposed to fuse a high-spatial-resolution panchromatic image with a low-spatial-resolution multispectral (LMS) image
