Abstract

In hyperspectral image (HSI) classification, the training set often contains a very limited number of high-dimensional samples, which can cause overfitting, especially in deep learning (DL) frameworks. This situation worsens when a bias exists between the feature distributions of the training and testing sets. In this article, we propose a novel method, referred to as adversarial prototype learning (APL), for learning an accurate HSI classification model in a unified manner when the training set contains few, high-dimensional, and biased samples. APL consists of a prototype learning module (PLM) and an adversarial alignment module (AAM). The PLM aims to alleviate overfitting by training prototypical classifiers with a simple inductive bias in the initial feature space. The AAM aims to reduce the bias between the feature distributions of the training and testing sets using two adversarial prototypical classifiers learned by the PLM. Iteratively training the PLM and AAM aligns the feature distributions of the training and testing sets while improving the generalization ability of the prototypical classifiers. Theoretical analysis indicates that APL lowers the upper error bound when classifying testing samples. We further apply APL in a DL framework to establish the adversarial prototypical network (APNet) architecture. Experimental results on four publicly available HSI datasets demonstrate that the proposed APNet alleviates overfitting, aligns the feature distributions of the training and testing sets, and achieves state-of-the-art performance.
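The sketch below is not the authors' implementation; it is a minimal PyTorch-style illustration of the two ideas the abstract describes: a distance-based prototypical classifier (as in the PLM) and a discrepancy term between two such classifiers on unlabeled testing features (as in the AAM). The network sizes, the number of spectral bands, the use of negative squared Euclidean distance as logits, and the loss weighting are all assumptions made for the example.

```python
# Hedged sketch, not the APL/APNet reference code: a prototypical classifier
# plus a simple two-classifier discrepancy used for adversarial alignment.
# All architectural choices below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypicalClassifier(nn.Module):
    """Classifies features by negative squared distance to learnable class prototypes."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Logits: the closer a feature is to a prototype, the larger its logit.
        return -torch.cdist(feats, self.prototypes).pow(2)

def discrepancy(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Mean absolute difference between the two classifiers' predicted distributions.

    In an adversarial alignment scheme, the classifiers maximize this quantity on
    unlabeled testing features while the feature extractor minimizes it.
    """
    return (F.softmax(logits_a, dim=1) - F.softmax(logits_b, dim=1)).abs().mean()

# Toy usage with assumed sizes: 200 spectral bands, 9 classes, 64-D features.
feat_dim, num_classes, num_bands = 64, 9, 200
extractor = nn.Sequential(nn.Linear(num_bands, 128), nn.ReLU(), nn.Linear(128, feat_dim))
clf1 = PrototypicalClassifier(num_classes, feat_dim)
clf2 = PrototypicalClassifier(num_classes, feat_dim)

x_train = torch.randn(16, num_bands)                 # labeled training spectra
y_train = torch.randint(0, num_classes, (16,))
x_test = torch.randn(32, num_bands)                  # unlabeled testing spectra

f_train, f_test = extractor(x_train), extractor(x_test)
cls_loss = F.cross_entropy(clf1(f_train), y_train) + F.cross_entropy(clf2(f_train), y_train)
align_loss = discrepancy(clf1(f_test), clf2(f_test))

# Here the two terms are simply summed to show they are differentiable; the
# actual adversarial min-max training alternates their optimization.
(cls_loss + align_loss).backward()
```

In this toy setup the prototype-based logits keep the classifier simple (a mild inductive bias), while the discrepancy term gives the feature extractor a signal to pull the testing-set features toward regions where both classifiers agree.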
