Abstract

Hyperspectral image (HSI) classification has seen exceptional progress in recent years, much of it driven by advances in convolutional neural networks (CNNs). Unlike RGB images, HSIs are captured by various remote sensors with different spectral configurations. Moreover, each HSI dataset contains only a very limited number of training samples, so deep CNN models are prone to overfitting. In this paper, we first propose a 3D asymmetric inception network, AINet, to overcome the overfitting problem. To emphasize spectral signatures over the spatial contexts of HSI data, the 3D convolution layer of AINet is replaced with two asymmetric inception units, i.e., a space inception unit and a spectrum inception unit, which convey and classify features effectively. In addition, we exploit a data-fusion transfer learning strategy to improve model initialization and classification performance. Extensive experiments show that the proposed approach outperforms state-of-the-art methods on several HSI benchmarks, including Pavia University, Indian Pines, and Kennedy Space Center (KSC).
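To illustrate the idea of replacing a full 3D convolution with asymmetric space and spectrum units, the sketch below factorizes the kernel into a spectral (k×1×1) branch and a spatial (1×k×k) branch. This is a minimal, hypothetical PyTorch example; the module names, channel sizes, and layer ordering are illustrative assumptions and not the authors' exact AINet implementation.

```python
# Hedged sketch: asymmetric factorization of a 3D convolution into a
# spectrum unit (kernel along the band axis) and a space unit (kernel over
# the spatial plane). All names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class SpectrumUnit(nn.Module):
    """3D convolution acting only along the spectral dimension (k x 1 x 1)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=(k, 1, 1),
                              padding=(k // 2, 0, 0))
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class SpaceUnit(nn.Module):
    """3D convolution acting only over the spatial plane (1 x k x k)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=(1, k, k),
                              padding=(0, k // 2, k // 2))
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


# Input layout: (batch, channels, bands, height, width) for an HSI patch,
# e.g. a 9x9 spatial patch with 103 bands (as in Pavia University).
x = torch.randn(2, 1, 103, 9, 9)
feat = SpaceUnit(16, 32)(SpectrumUnit(1, 16)(x))
print(feat.shape)  # torch.Size([2, 32, 103, 9, 9])
```

Compared with a dense k×k×k kernel, this separation reduces parameters and lets the spectral branch be weighted more heavily, which is one plausible way to realize the spectral emphasis described in the abstract.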
