Abstract

Hyperspectral images (HSIs) have been applied to a wide range of areas thanks to their high spectral resolution. However, the spatial resolution of HSIs is inevitably compromised due to hardware limitations. While HSIs can be enhanced by super-resolution techniques, the reconstructed high-resolution HSIs (HR-HSIs) may still suffer from spectral distortion and spatial blurring. Therefore, this paper proposes a Parallel Framework based on Modality-Adaptive Wavelet Networks (PF-MAWN) to reconstruct HR-HSIs by effectively fusing low-resolution HSIs (LR-HSIs) and high-resolution multispectral images (HR-MSIs). To avoid spectral artifacts, multiple MAWNs work in parallel and fuse HR-MSIs/LR-HSIs in groups of highly correlated adjacent bands. Each MAWN deeply incorporates the wavelet transform (WT) into convolutional neural networks (CNNs) through layers of Multidomain Preprocessing Blocks (MPBs), Multimodal Adaptation Blocks (MABs), and Multilevel Connection Blocks (MCBs). Within each MAB, two Feature Alignment Modules (FAMs) based on self-attention radiometrically match the HR-MSIs/LR-HSIs, and an Information Control Module (ICM) based on Gaussian-weighted attention controls the textural details in the HR-HSIs. The framework exhibits good extensibility and generalizability on spectral images with arbitrary numbers of bands and diverse scales. Experimental results show that PF-MAWN can reconstruct high-quality HR-HSIs covering objects of different sizes, ranging from microscopic to macroscopic. It outperforms several state-of-the-art methods on four public datasets of histopathology tissues, everyday objects, and remote sensing scenes.
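To illustrate the parallel band-grouping idea described above, the following is a minimal sketch (not the authors' implementation): adjacent LR-HSI bands are split into groups, each group is fused with the HR-MSI by its own branch (a simple placeholder standing in for a MAWN, whose internal blocks are not specified by the abstract), and the per-group outputs are concatenated into the reconstructed HR-HSI. Names such as `PlaceholderMAWN` and `band_group_size` are assumptions for illustration only.

```python
# Illustrative sketch of the parallel, band-grouped fusion framework.
# All module names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn


class PlaceholderMAWN(nn.Module):
    """Stand-in for one Modality-Adaptive Wavelet Network branch (details not given in the abstract)."""

    def __init__(self, hsi_bands: int, msi_bands: int, scale: int):
        super().__init__()
        # Spatially upsample the LR-HSI group to match the HR-MSI resolution.
        self.upsample = nn.Upsample(scale_factor=scale, mode="bicubic", align_corners=False)
        # Simple convolutional fusion in place of the MPB/MAB/MCB layers.
        self.fuse = nn.Sequential(
            nn.Conv2d(hsi_bands + msi_bands, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, hsi_bands, 3, padding=1),
        )

    def forward(self, lr_hsi_group, hr_msi):
        up = self.upsample(lr_hsi_group)
        return self.fuse(torch.cat([up, hr_msi], dim=1))


class ParallelFusion(nn.Module):
    """Run one branch per group of adjacent bands, then concatenate along the spectral dimension."""

    def __init__(self, total_bands: int, msi_bands: int, band_group_size: int, scale: int):
        super().__init__()
        self.band_group_size = band_group_size
        n_groups = (total_bands + band_group_size - 1) // band_group_size
        sizes = [min(band_group_size, total_bands - i * band_group_size) for i in range(n_groups)]
        self.branches = nn.ModuleList(PlaceholderMAWN(s, msi_bands, scale) for s in sizes)

    def forward(self, lr_hsi, hr_msi):
        groups = torch.split(lr_hsi, self.band_group_size, dim=1)
        return torch.cat([b(g, hr_msi) for b, g in zip(self.branches, groups)], dim=1)


# Example: a 31-band LR-HSI (32x32) fused with a 3-band HR-MSI (128x128) -> 31-band HR-HSI (128x128).
model = ParallelFusion(total_bands=31, msi_bands=3, band_group_size=8, scale=4)
hr_hsi = model(torch.rand(1, 31, 32, 32), torch.rand(1, 3, 128, 128))
print(hr_hsi.shape)  # torch.Size([1, 31, 128, 128])
```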
