Abstract

Fusing hyperspectral images, which have low spatial but high spectral resolution, with multispectral images, which have high spatial but low spectral resolution, is an important way to improve spatial resolution. Existing deep learning-based image fusion methods usually neglect the ability of neural networks to understand differential features. In addition, their loss constraints do not stem from the physical characteristics of hyperspectral imaging sensors. To address these issues, we propose a self-supervised loss and a spatially and spectrally separable loss. 1) The self-supervised loss: Unlike previous approaches that directly stack the upsampled hyperspectral and multispectral images as input, we expect the processed hyperspectral images to preserve the integrity of the hyperspectral information while achieving a reasonable balance between overall spatial and spectral features. First, the pre-interpolated hyperspectral images are decomposed into subspaces that serve as self-supervised labels. A network is then designed to learn the subspace information and extract the most discriminative features. 2) The separable loss: According to the physical characteristics of hyperspectral images, the pixel-based mean square error loss is first divided into a spatial domain loss and a spectral domain loss; the similarity scores of the images are then computed and used to construct the weighting coefficients of the two domain losses. Finally, the separable loss is expressed as the weighted combination of the two terms. Experiments on public benchmark datasets indicate that the self-supervised loss and the separable loss improve fusion performance.
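As a rough illustration of the separable loss described above, the following PyTorch-style sketch splits the fusion error into a spatial-domain term and a spectral-domain term and recombines them with similarity-derived weights. The function name, the spectral-angle-style spectral term, and the particular weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def separable_loss(fused, reference, eps=1e-8):
    """Sketch of a spatially/spectrally separable loss (assumed form).

    fused, reference: tensors of shape (B, C, H, W), where C is the number
    of spectral bands. The details below are illustrative, not the authors'
    exact definition.
    """
    # Spatial-domain term: per-band pixel-wise MSE, averaged over all bands.
    spatial_loss = F.mse_loss(fused, reference)

    # Spectral-domain term: discrepancy between per-pixel spectra,
    # here measured with a spectral-angle-style score (an assumption).
    dot = (fused * reference).sum(dim=1)
    norm = fused.norm(dim=1) * reference.norm(dim=1) + eps
    spectral_loss = (1.0 - dot / norm).mean()

    # Similarity scores used to build the weighting coefficients (assumed
    # scheme): the weaker domain receives the larger weight.
    with torch.no_grad():
        spatial_sim = 1.0 / (1.0 + spatial_loss)
        spectral_sim = 1.0 / (1.0 + spectral_loss)
        total = spatial_sim + spectral_sim
        w_spatial = spectral_sim / total
        w_spectral = spatial_sim / total

    # Jointly express the separable loss as a weighted combination.
    return w_spatial * spatial_loss + w_spectral * spectral_loss
```

In this sketch the weights are detached from the computation graph, so they rebalance the two domain terms without themselves receiving gradients; how the similarity scores are actually defined and normalized in the paper may differ.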
