Abstract

With the rapid development of remote sensing platforms, sensors, and related technologies, remote sensing has become an important means of collecting information on land cover and its changes, and it plays an important role in land cover classification and dynamic monitoring. Due to limitations in the imaging mechanisms of in-orbit optical remote sensing sensors, it is difficult for a single sensor to achieve both high spatial resolution and high spectral resolution. There is therefore a need to use images from different sensors in a complementary way to improve the accuracy of land cover interpretation. In this study, building on the Gram-Schmidt Adaptive (GSA) method and its classical extensions, a neural network-based multi-source remote sensing image fusion method is proposed that combines a U-Net encoder-decoder structure with a Non-Local spatial attention mechanism. The improved method is compared with existing fusion methods on two experimental datasets using three no-reference and three full-reference fusion evaluation metrics.
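To make the described architecture concrete, the following is a minimal sketch of a fusion network of the kind the abstract outlines: a small U-Net-style encoder-decoder that takes an upsampled multispectral image and a panchromatic band as input, with an embedded-Gaussian Non-Local (spatial attention) block at the bottleneck. The layer widths, depth, and fusion head here are illustrative assumptions (PyTorch is also an assumption), not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Embedded-Gaussian Non-Local (self-attention) block over spatial positions."""
    def __init__(self, channels):
        super().__init__()
        inter = max(channels // 2, 1)
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                     # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise spatial attention
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class FusionUNet(nn.Module):
    """Fuses an upsampled multispectral (MS) image with a panchromatic (PAN) band."""
    def __init__(self, ms_bands=4):
        super().__init__()
        self.enc1 = conv_block(ms_bands + 1, 32)       # MS bands + 1 PAN band
        self.enc2 = conv_block(32, 64)
        self.attn = NonLocalBlock(64)                  # spatial attention at the bottleneck
        self.dec1 = conv_block(64 + 32, 32)            # skip connection from enc1
        self.head = nn.Conv2d(32, ms_bands, 1)         # fused high-resolution MS output

    def forward(self, ms_up, pan):
        x = torch.cat([ms_up, pan], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        e2 = self.attn(e2)
        d1 = F.interpolate(e2, scale_factor=2, mode="bilinear", align_corners=False)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))
        return self.head(d1)

if __name__ == "__main__":
    ms_up = torch.rand(1, 4, 128, 128)   # MS image upsampled to PAN resolution
    pan = torch.rand(1, 1, 128, 128)     # panchromatic band
    fused = FusionUNet(ms_bands=4)(ms_up, pan)
    print(fused.shape)                   # torch.Size([1, 4, 128, 128])
```

In this kind of design the Non-Local block lets every spatial position attend to all others, which complements the purely local receptive fields of the convolutional encoder-decoder; how and where such a block is placed relative to GSA-style injection is a design choice not specified by the abstract.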
