Abstract

Hyperspectral remote sensing images (HSI) are characterized by rich spectral information but low spatial resolution, and a cost-effective way to supplement the spatial information is fusion with multispectral remote sensing images (MSI). This study proposes a detail injection network for HSI and MSI fusion based on multiscale and global contextual features (MGDIN), which extracts features at different scales using residual multiscale convolution and captures contextual information and long-range dependencies via a global contextual block. MGDIN improves the spatial and spectral quality of fused images by minimizing a new loss function that combines content, spectral and edge losses. Experiments on five publicly available datasets (Botswana, Pavia Centre, Pavia University, Washington DC Mall and Houston) show that MGDIN outperforms popular algorithms in fusion quality and learning ability. The new loss function also outperforms popular loss functions in experiments on the Botswana dataset.
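The abstract names a composite objective combining content, spectral and edge terms but does not give their exact forms or weights. A minimal sketch of what such a loss could look like, assuming an L1 content term, a spectral-angle (SAM-style) term and a gradient-based edge term with illustrative weights:

```python
import numpy as np

def content_loss(fused, ref):
    # Pixel-wise L1 distance, a common choice for a content/fidelity term.
    return np.mean(np.abs(fused - ref))

def spectral_loss(fused, ref, eps=1e-8):
    # Mean spectral angle between per-pixel spectra (SAM-style term);
    # arrays are (H, W, bands), the angle is taken along the band axis.
    dot = np.sum(fused * ref, axis=-1)
    norms = np.linalg.norm(fused, axis=-1) * np.linalg.norm(ref, axis=-1)
    return np.mean(np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0)))

def edge_loss(fused, ref):
    # L1 distance between horizontal and vertical finite-difference gradients,
    # penalizing mismatched edges.
    gx = np.mean(np.abs(np.diff(fused, axis=0) - np.diff(ref, axis=0)))
    gy = np.mean(np.abs(np.diff(fused, axis=1) - np.diff(ref, axis=1)))
    return gx + gy

def total_loss(fused, ref, w_spec=0.1, w_edge=0.1):
    # w_spec and w_edge are illustrative placeholders, not the paper's values.
    return (content_loss(fused, ref)
            + w_spec * spectral_loss(fused, ref)
            + w_edge * edge_loss(fused, ref))
```

The individual terms and weights here are assumptions for illustration; the paper's actual loss may differ in both form and weighting.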
