Abstract

Spatial resolution is a crucial indicator of hyperspectral imaging (HSI) quality, and obtaining high-resolution (HR) hyperspectral images without auxiliary information has become increasingly challenging. One promising approach is to use deep-learning (DL) techniques to reconstruct HR hyperspectral images from low-resolution (LR) images, namely super-resolution (SR). Although convolutional neural networks are commonly used for hyperspectral image SR (HSI-SR), they often suffer performance degradation because they lack the ability to model long-range dependencies. In this article, we propose a dual self-attention Swin transformer SR (DSSTSR) network that exploits the shifted-windows (Swin) transformer's ability to represent both global and local spatial features and learns spectral sequence information from adjacent bands of the HSI. In addition, DSSTSR incorporates an image-denoising module based on the wavelet transform to mitigate the impact of stripe noise on HSI-SR. Extensive experiments on publicly available close-range datasets demonstrate that DSSTSR outperforms state-of-the-art HSI-SR methods on three image-quality metrics. Furthermore, we apply DSSTSR to the SR of satellite hyperspectral images and obtain improved classification results. Compared with its competitors, DSSTSR better enhances spatial resolution while preserving spectral information. These results suggest that the DSSTSR network has great potential for standardization in remote-sensing image processing and for practical applications.
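
The abstract does not give implementation details of the wavelet-based denoising module, so the following is only a minimal sketch of the general idea behind wavelet-domain stripe suppression: oriented stripe noise concentrates in one directional detail sub-band of a 2-D wavelet decomposition, which can then be damped before reconstruction. It assumes NumPy and PyWavelets, uses hypothetical names and parameters, and is not the DSSTSR module itself.

```python
# Illustrative sketch only: suppress stripe noise in one HSI band by damping
# the wavelet detail sub-band that carries most of the stripe energy.
# This is NOT the DSSTSR denoising module; library choice and parameters
# (wavelet, level, damping factor) are assumptions for illustration.
import numpy as np
import pywt


def destripe_band(band: np.ndarray, wavelet: str = "db4",
                  level: int = 3, damping: float = 0.1) -> np.ndarray:
    """Attenuate directional stripe noise in a 2-D hyperspectral band."""
    # Multilevel 2-D decomposition:
    # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    coeffs = pywt.wavedec2(band, wavelet=wavelet, level=level)
    cleaned = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        # Stripes are strongly oriented, so their energy concentrates in one
        # of the two directional detail sub-bands; damp whichever is stronger.
        if np.sum(cH ** 2) >= np.sum(cV ** 2):
            cH = cH * damping
        else:
            cV = cV * damping
        cleaned.append((cH, cV, cD))
    rec = pywt.waverec2(cleaned, wavelet=wavelet)
    # waverec2 may pad by a pixel; crop back to the original shape.
    return rec[: band.shape[0], : band.shape[1]]


if __name__ == "__main__":
    # Synthetic band with additive vertical stripes, only to exercise the sketch.
    rng = np.random.default_rng(0)
    clean = rng.normal(0.5, 0.05, size=(128, 128))
    stripes = np.tile(rng.normal(0.0, 0.2, size=(1, 128)), (128, 1))
    noisy = clean + stripes
    print("column-mean std before:", float(noisy.mean(axis=0).std()))
    print("column-mean std after :", float(destripe_band(noisy).mean(axis=0).std()))
```

The sketch only conveys why a wavelet decomposition helps isolate oriented stripe energy from the rest of the image; how DSSTSR integrates such a transform into its network is described in the full paper.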
