Abstract

Land cover classification has long been an important task in remote sensing. With the development of various sensor technologies, classification with multisource remote sensing (MSRS) data has shown advantages over using a single data type. Hyperspectral images (HSIs) capture the spectral properties of land cover and are therefore widely used for land cover understanding, while light detection and ranging (LiDAR) data provide ground elevation information that is highly useful for urban scene analysis. Existing HSI and LiDAR fusion methods perform feature extraction and feature fusion separately, and thus cannot fully exploit the correlation between data sources. To make full use of this correlation, this article proposes an unsupervised feature extraction-fusion network for HSI and LiDAR data that uses feature fusion to guide the feature extraction procedure. Specifically, the network takes multisource data as input and directly outputs a unified fused feature. A multimodal graph is constructed for feature fusion, and graph-based loss functions, including a Laplacian loss and a t-distributed stochastic neighbor embedding (t-SNE) loss, constrain the feature extraction network. Experimental results on several data sets demonstrate that the proposed network achieves better classification performance than several state-of-the-art methods.
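The abstract names two graph-based losses; their exact formulations appear only in the full text. As a rough, non-authoritative illustration of the Laplacian loss idea, the sketch below computes the standard graph smoothness term tr(FᵀLF), where L = D − W is the unnormalized Laplacian of a multimodal affinity matrix W. All function and variable names here are hypothetical, not the paper's notation.

```python
import torch

def laplacian_loss(features: torch.Tensor, affinity: torch.Tensor) -> torch.Tensor:
    """Graph smoothness loss tr(F^T L F) / n, with L = D - W.

    Equivalent to 0.5 * sum_ij W_ij * ||f_i - f_j||^2: samples that are
    strongly connected in the multimodal graph are pushed toward similar
    fused features. `features` is (n, d); `affinity` is a symmetric (n, n)
    weight matrix W. Names are illustrative, not taken from the paper.
    """
    degree = torch.diag(affinity.sum(dim=1))   # D: diagonal degree matrix
    laplacian = degree - affinity              # L = D - W (unnormalized)
    return torch.trace(features.t() @ laplacian @ features) / features.shape[0]

# Toy usage: 64 samples with 32-dimensional fused features.
n, d = 64, 32
feats = torch.randn(n, d, requires_grad=True)
w = torch.rand(n, n)
w = 0.5 * (w + w.t())                          # symmetrize the affinity matrix
loss = laplacian_loss(feats, w)
loss.backward()                                # gradients flow back to the features
```

A loss of this form can constrain a feature extraction network directly, since it is differentiable with respect to the fused features; the paper's t-SNE loss serves a related neighborhood-preserving role but is not sketched here.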
