Abstract

Most CNN models exhibit two major flaws in hyperspectral image (HSI) restoration tasks. First, the scarcity of high-dimensional HSI training examples makes it difficult for deep learning methods to learn effective spatial and spectral representations. Second, existing CNN-based methods model only local relations and are limited in capturing long-range dependencies. In this paper, we customize a novel dual-stream Transformer (DSTrans) for HSI restoration, which mainly consists of a dual-stream attention and a dual-stream feed-forward network. Specifically, we develop the dual-stream attention consisting of multi-Dconv-head spectral attention (MDSA) and multi-head spatial self-attention (MSSA). MDSA and MSSA respectively calculate self-attention along the spectral and spatial dimensions in local windows, capturing long-range spectral dependencies and modeling global spatial interactions. Meanwhile, the dual-stream feed-forward network is developed to extract global signals and local details in parallel branches. In addition, we adopt a multi-task network that jointly trains an auxiliary RGB image (RGBI) task and the HSI task, so that both the abundant RGBI samples and the limited HSI samples are used to learn the parameter distribution of DSTrans. Extensive experimental results demonstrate that our method achieves state-of-the-art results on HSI restoration tasks, including HSI super-resolution and denoising. The source code can be obtained at: https://github.com/yudadabing/Dual-Stream-Transformer-for-Hyperspectral-Image-Restoration.
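To make the dual-stream attention concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract: MDSA computes self-attention across the spectral (channel) dimension using depthwise-convolution-enriched projections, while MSSA computes multi-head self-attention over spatial positions inside non-overlapping local windows, and the two streams are fused. This is not the authors' implementation (see the linked GitHub repository for the official code); the layer structure, head counts, window size, and summation-based fusion are all illustrative assumptions.

```python
# Hedged sketch of the dual-stream attention (spectral MDSA + spatial MSSA).
# All design details here are assumptions for illustration, not the official DSTrans code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MDSA(nn.Module):
    """Multi-Dconv-head spectral attention: self-attention across channels (spectra)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.temperature = nn.Parameter(torch.ones(heads, 1, 1))
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.qkv_dw = nn.Conv2d(dim * 3, dim * 3, kernel_size=3, padding=1, groups=dim * 3)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        q, k, v = self.qkv_dw(self.qkv(x)).chunk(3, dim=1)
        # reshape so attention is computed channel-to-channel within each head
        q = q.reshape(b, self.heads, c // self.heads, h * w)
        k = k.reshape(b, self.heads, c // self.heads, h * w)
        v = v.reshape(b, self.heads, c // self.heads, h * w)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (B, heads, C/h, C/h)
        out = attn.softmax(dim=-1) @ v                       # (B, heads, C/h, H*W)
        return self.proj(out.reshape(b, c, h, w))


class MSSA(nn.Module):
    """Multi-head spatial self-attention inside non-overlapping local windows."""
    def __init__(self, dim, heads=4, window=8):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                          # x: (B, C, H, W); H, W divisible by window
        b, c, h, w = x.shape
        ws = self.window
        # partition the feature map into (B * num_windows, ws*ws, C) token sequences
        tokens = (x.reshape(b, c, h // ws, ws, w // ws, ws)
                   .permute(0, 2, 4, 3, 5, 1)
                   .reshape(-1, ws * ws, c))
        out, _ = self.attn(tokens, tokens, tokens)
        # reverse the window partition back to (B, C, H, W)
        out = (out.reshape(b, h // ws, w // ws, ws, ws, c)
                  .permute(0, 5, 1, 3, 2, 4)
                  .reshape(b, c, h, w))
        return out


class DualStreamAttention(nn.Module):
    """Run the spectral and spatial streams in parallel; summation fusion is an assumption."""
    def __init__(self, dim, heads=4, window=8):
        super().__init__()
        self.spectral = MDSA(dim, heads)
        self.spatial = MSSA(dim, heads, window)

    def forward(self, x):
        return self.spectral(x) + self.spatial(x)


if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)                        # toy feature map
    print(DualStreamAttention(32)(feat).shape)               # torch.Size([1, 32, 64, 64])
```

The sketch highlights the key asymmetry the abstract describes: MDSA's attention map is channel-by-channel (capturing long-range spectral dependencies at linear cost in the number of pixels), whereas MSSA's attention map is token-by-token within each local window (modeling spatial interactions).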
