Remote sensing sensors acquire hyperspectral images (HSI) with outstanding spectral resolution, but hardware limitations prevent them from achieving comparably high spatial resolution. The availability of high-resolution multispectral images (MSI) of the same scene has motivated HSI-MSI fusion as a way to obtain high-resolution HSI. Recently, deep learning algorithms have shown excellent performance in HSI-MSI fusion: they automatically extract the priors latent in the input data and reconstruct the output image. However, their performance depends on the inherent characteristics of the training data. The HSI volume is generally much larger than the MSI volume, so although the high-resolution MSI provides richer spatial information, its contribution to the fusion is weakened by its smaller volume. To address this volume discrepancy, a deep convolutional neural network named BASFE is proposed. It employs a two-branch structure that balances the spatio-spectral features extracted from the HSI and MSI inputs, enabling effective feature fusion and allowing better efficiency with simpler structures. Experiments show that balancing the data significantly improves the model's performance. Furthermore, to reduce complexity and computational burden, the spatio-spectral feature-extraction residual modules are embedded in a dense architecture so that these features are extracted jointly. Comprehensive experiments on five challenging datasets show that BASFE surpasses state-of-the-art algorithms: compared to the first- and second-best competing methods, it achieves average improvements of 1.58 dB and 3.68 dB in peak signal-to-noise ratio (PSNR) across the five studied datasets. The source code of BASFE is available at https://github.com/rajaei-arash/BASFE_hyperspectral_fusion.git.
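To make the volume discrepancy concrete, the toy sketch below (not the authors' code; all sizes, band counts, and the projection scheme are hypothetical) builds a low-resolution HSI and a high-resolution MSI of the same scene, shows that the HSI carries more raw data despite its coarser pixels, and then mimics the two-branch balancing idea by projecting each input to feature tensors of equal volume before fusing:

```python
import numpy as np

# Hypothetical scene: many-band HSI at coarse resolution,
# few-band MSI at 4x finer resolution.
bands_hsi, bands_msi = 100, 4
h_lr, w_lr = 16, 16
scale = 4
h_hr, w_hr = h_lr * scale, w_lr * scale

rng = np.random.default_rng(0)
hsi = rng.random((bands_hsi, h_lr, w_lr))  # rich spectra, coarse pixels
msi = rng.random((bands_msi, h_hr, w_hr))  # few bands, fine pixels

# Volume discrepancy: 100*16*16 = 25600 HSI values vs 4*64*64 = 16384 MSI values.
print(hsi.size, msi.size)

# Two-branch "balancing" sketch: project each input to n_feat feature maps
# at the high spatial resolution, so both branches have equal volume.
n_feat = 32
proj_hsi = rng.random((n_feat, bands_hsi)) / bands_hsi  # spectral projection
feat_hsi = np.einsum('fb,bhw->fhw', proj_hsi, hsi)
feat_hsi = feat_hsi.repeat(scale, axis=1).repeat(scale, axis=2)  # naive upsampling
proj_msi = rng.random((n_feat, bands_msi)) / bands_msi
feat_msi = np.einsum('fb,bhw->fhw', proj_msi, msi)  # already at full resolution

assert feat_hsi.shape == feat_msi.shape == (n_feat, h_hr, w_hr)
fused = feat_hsi + feat_msi  # balanced branches now contribute equal volumes
```

In BASFE itself the projections are learned convolutional branches rather than fixed random matrices; the sketch only illustrates why equalizing the two feature volumes lets a simpler fusion stage treat the spatial and spectral inputs symmetrically.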