Abstract

Deep learning has recently attracted much attention in the field of hyperspectral image classification due to its powerful representation and generalization abilities. Most current deep learning models are trained in a supervised manner, requiring large amounts of labeled samples to achieve state-of-the-art performance. Unfortunately, pixel-level labeling in hyperspectral imagery is difficult, time-consuming, and human-dependent. To address this issue, we propose an unsupervised feature learning model using multi-modal data, hyperspectral and LiDAR in particular. It takes advantage of the relationship between hyperspectral and LiDAR data to extract features without using any label information. We then design a dual fine-tuning strategy to transfer the extracted features to hyperspectral image classification with a small number of training samples. This strategy is able to exploit not only the semantic information but also the intrinsic structure information of the training samples. To evaluate the proposed model, we conduct comprehensive experiments on three hyperspectral and LiDAR datasets. Experimental results show that our proposed model achieves better performance than several state-of-the-art deep learning models.
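The abstract does not specify the network layers, losses, or hyperparameters, so the following is only a minimal sketch of the two-stage idea under stated assumptions: an unsupervised pretraining step that predicts co-registered LiDAR values from hyperspectral pixels (one plausible way to exploit the HSI-LiDAR relationship without labels), followed by a fine-tuning step that combines a semantic cross-entropy term with an assumed structure term pulling same-class embeddings together. All names, dimensions, and loss choices (`HSIEncoder`, `HSI_BANDS`, `alpha`, etc.) are illustrative, not the authors' implementation.

```python
# Hedged sketch of unsupervised cross-modal pretraining + dual fine-tuning.
# Every architectural choice below is an assumption for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

HSI_BANDS = 144   # assumption: number of hyperspectral bands
FEAT_DIM = 64     # assumption: learned feature dimension
N_CLASSES = 15    # assumption: number of land-cover classes

class HSIEncoder(nn.Module):
    """Maps a hyperspectral pixel vector to a feature embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HSI_BANDS, 128), nn.ReLU(),
            nn.Linear(128, FEAT_DIM), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class LiDARHead(nn.Module):
    """Predicts the co-registered LiDAR value from the HSI embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(FEAT_DIM, 1)
    def forward(self, z):
        return self.net(z)

def pretrain_step(encoder, lidar_head, hsi, lidar, opt):
    """Unsupervised step: exploit the HSI-LiDAR relationship, no labels used."""
    loss = F.mse_loss(lidar_head(encoder(hsi)), lidar)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def dual_finetune_step(encoder, classifier, hsi, labels, opt, alpha=0.1):
    """Fine-tuning with a semantic term (cross-entropy) plus an assumed
    structure term that pulls same-class embeddings together."""
    z = encoder(hsi)
    semantic = F.cross_entropy(classifier(z), labels)
    same = (labels[:, None] == labels[None, :]).float()  # same-class pair mask
    structure = (same * torch.cdist(z, z)).sum() / same.sum().clamp(min=1)
    loss = semantic + alpha * structure
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

if __name__ == "__main__":
    enc, head = HSIEncoder(), LiDARHead()
    clf = nn.Linear(FEAT_DIM, N_CLASSES)
    hsi, lidar = torch.randn(32, HSI_BANDS), torch.randn(32, 1)
    opt = torch.optim.Adam([*enc.parameters(), *head.parameters()], lr=1e-3)
    print("pretrain loss:", pretrain_step(enc, head, hsi, lidar, opt))
    labels = torch.randint(0, N_CLASSES, (32,))
    opt2 = torch.optim.Adam([*enc.parameters(), *clf.parameters()], lr=1e-4)
    print("finetune loss:", dual_finetune_step(enc, clf, hsi, labels, opt2))
```

The design choice mirrored here is that pretraining updates only the encoder and the LiDAR prediction head, while fine-tuning reuses the pretrained encoder with a fresh classifier and a smaller learning rate, which is one common way to transfer unsupervised features to a small labeled set.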
