Deep learning is an important research topic in the field of image super-resolution. However, the performance of existing hyperspectral image super-resolution networks is limited by their feature learning for hyperspectral images, and current algorithms struggle to extract diverse features. In this paper, we address these feature learning limitations by introducing the Channel-Attention-Based Spatial–Spectral Feature Extraction network (CSSFENet), which enhances hyperspectral image feature diversity and optimizes the network loss functions. Our contributions include: (a) a convolutional neural network super-resolution algorithm that incorporates diverse feature extraction, strengthening the network's learning of diverse features by elevating the feature matrix rank; (b) a three-dimensional (3D) feature extraction convolution module, the Channel-Attention-Based Spatial–Spectral Feature Extraction Module (CSSFEM), which boosts the network's performance in both the spatial and spectral domains; (c) a feature diversity loss function, designed from the singular values of the image matrix, that maximizes the independence of matrix elements; and (d) a spatial–spectral gradient loss function, based on spatial and spectral gradient values, that enhances the spatial–spectral smoothness of the reconstructed image. Compared with existing hyperspectral super-resolution algorithms on three common hyperspectral datasets, our method showed superiority under four evaluation indexes: PSNR, mPSNR, SSIM, and SAM.
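The two loss terms in contributions (c) and (d) are only named in the abstract. Below is a minimal sketch of what a singular-value-based feature diversity loss and a spatial–spectral gradient loss could look like, assuming PyTorch and standard (batch, channel/band, height, width) tensors; the entropy-of-singular-values form and the finite-difference gradients are illustrative assumptions, not the paper's exact formulations.

```python
# Hedged sketch: plausible forms of the two loss terms described in the abstract.
# The formulations here are illustrative assumptions, not the paper's definitions.
import torch


def feature_diversity_loss(features: torch.Tensor) -> torch.Tensor:
    """Encourage diverse (higher effective rank) features via singular values.

    `features` is assumed to be (batch, channels, height, width); each sample's
    feature map is flattened to a (channels, H*W) matrix, and its singular-value
    spectrum is pushed toward a flatter distribution.
    """
    b, c, h, w = features.shape
    mats = features.reshape(b, c, h * w)
    s = torch.linalg.svdvals(mats)                        # (b, min(c, H*W))
    s = s / (s.sum(dim=-1, keepdim=True) + 1e-8)          # normalize to a distribution
    entropy = -(s * torch.log(s + 1e-8)).sum(dim=-1)      # higher entropy = flatter spectrum
    return -entropy.mean()                                # minimizing this raises diversity


def spatial_spectral_gradient_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """Match spatial and spectral gradients of the reconstruction to the reference.

    `sr` and `hr` are assumed to be (batch, bands, height, width) hyperspectral cubes.
    """
    def grads(x: torch.Tensor):
        return (x[..., 1:, :] - x[..., :-1, :],   # vertical spatial gradient
                x[..., :, 1:] - x[..., :, :-1],   # horizontal spatial gradient
                x[:, 1:] - x[:, :-1])             # spectral (band-to-band) gradient

    return sum(torch.mean(torch.abs(g_sr - g_hr))
               for g_sr, g_hr in zip(grads(sr), grads(hr)))
```

In this sketch, the diversity term rewards a flat singular-value spectrum of the flattened feature matrix (a proxy for higher matrix rank), while the gradient term penalizes L1 differences between the spatial and spectral finite-difference gradients of the reconstructed and reference cubes, encouraging spatial–spectral smoothness consistent with the ground truth.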