Current deep learning methods for single hyperspectral image super-resolution are non-blind: they adopt a simplistic bicubic degradation model, generalize poorly, and cannot handle unknown degradations. Meanwhile, blind super-resolution methods designed for RGB images neglect the rich spectral information in hyperspectral images, which causes spectral distortion in the results. To address these issues, we incorporate degradation estimation and propose a blind super-resolution algorithm for single hyperspectral images. Specifically, we first use a blur kernel estimation network and a deblurring network to obtain a blur-free image. During kernel estimation, we vary the receptive field by exchanging spatial and channel information, thereby capturing blur cues at different scales. In addition, to reduce the errors introduced by kernel mismatch and to recover the inter-band correlations ignored by group convolution, we fuse the spatial information of the blurred image to compensate for these errors and then use the spatial and spectral information of neighboring bands to guide feature extraction for the current band. Finally, we integrate the features from the different branches in a global fusion network to further improve spatial and spectral fidelity. Extensive experiments on synthetic and real-world hyperspectral datasets demonstrate that our approach outperforms state-of-the-art methods in both quantitative and qualitative evaluations. Our code is publicly available at https://github.com/YoungP2001/DBSR.
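For illustration, the sketch below shows one way such a blind hyperspectral super-resolution pipeline could be wired together in PyTorch: a kernel estimation network, a kernel-conditioned deblurring network, and a joint-band upsampling stage. The module names (KernelEstimator, DeblurNet, BandGuidedSR, BlindHSISR), layer choices, and tensor shapes are assumptions made for clarity; they are not the authors' released implementation at the repository above.

```python
# Illustrative sketch of a blind hyperspectral SR pipeline (kernel estimation,
# deblurring, joint-band super-resolution). Module names and shapes are
# assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelEstimator(nn.Module):
    """Predicts a per-image blur kernel from the low-resolution input."""
    def __init__(self, bands: int, kernel_size: int = 21):
        super().__init__()
        self.kernel_size = kernel_size
        self.body = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, kernel_size * kernel_size)

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        feat = self.body(lr).flatten(1)
        kernel = self.head(feat).view(-1, 1, self.kernel_size, self.kernel_size)
        # Normalize so the kernel sums to one.
        return F.softmax(kernel.flatten(2), dim=-1).view_as(kernel)


class DeblurNet(nn.Module):
    """Removes the estimated blur, conditioned on the predicted kernel."""
    def __init__(self, bands: int, kernel_size: int = 21):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(bands + kernel_size * kernel_size, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, bands, 3, padding=1),
        )

    def forward(self, lr: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
        # Broadcast the flattened kernel over the spatial grid as extra channels.
        k = kernel.flatten(1)[:, :, None, None].expand(-1, -1, lr.size(2), lr.size(3))
        return lr + self.body(torch.cat([lr, k], dim=1))  # residual deblurring


class BandGuidedSR(nn.Module):
    """Super-resolves all bands jointly so each band can draw on the others."""
    def __init__(self, bands: int, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, bands * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual learning on top of a bicubic upsample of the deblurred input.
        base = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return base + self.body(x)


class BlindHSISR(nn.Module):
    """End-to-end pipeline: estimate kernel, deblur, then super-resolve."""
    def __init__(self, bands: int = 31, scale: int = 4):
        super().__init__()
        self.kernel_net = KernelEstimator(bands)
        self.deblur_net = DeblurNet(bands)
        self.sr_net = BandGuidedSR(bands, scale)

    def forward(self, lr: torch.Tensor):
        kernel = self.kernel_net(lr)
        deblurred = self.deblur_net(lr, kernel)
        sr = self.sr_net(deblurred)
        return sr, kernel


if __name__ == "__main__":
    model = BlindHSISR(bands=31, scale=4)
    lr = torch.rand(1, 31, 32, 32)      # a 31-band low-resolution patch
    sr, kernel = model(lr)
    print(sr.shape, kernel.shape)       # (1, 31, 128, 128), (1, 1, 21, 21)
```

The two-stage structure (explicit kernel estimation followed by kernel-conditioned restoration) is the key departure from non-blind bicubic pipelines; the cross-band guidance and global fusion described in the abstract would replace the plain joint convolutions used here.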