Abstract

A significant challenge in hyperspectral data classification is the limited number of available training samples. Spatial-spectral methods approach this problem by employing two distinct views of the data (spatial and spectral) and assuming local pixel similarity and label continuity. Following this idea, we propose a sample selection method that exploits the diversity between the spectral and spatial information to extend the set of training points. New seeds, denoted as “borderline candidates,” are derived from the disagreement between the support vector machine and the Markov random field classifiers and are verified by spatial neighborhood voting to reduce the label noise. We show how taking advantage of the learners’ diversity (instead of their consensus) improves the classification result. The proposed method is tested with several classification algorithms and provides a reliable and useful extension of the training set, allowing them to find better class models and improve the classification accuracy.
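To make the selection step concrete, the following is a minimal sketch, not the published implementation, of disagreement-based candidate selection followed by spatial neighborhood voting. It assumes two per-pixel label maps of shape (H, W), `svm_labels` from a pixel-wise SVM and `mrf_labels` from a spatial-spectral MRF classifier; the function names and the agreement threshold are illustrative assumptions.

```python
import numpy as np


def borderline_candidates(svm_labels: np.ndarray, mrf_labels: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels where the two classifiers disagree."""
    return svm_labels != mrf_labels


def neighborhood_vote(labels: np.ndarray, row: int, col: int, radius: int = 1):
    """Majority label in the (2*radius+1)^2 window around (row, col),
    excluding the center pixel; returns (label, vote_fraction)."""
    h, w = labels.shape
    r0, r1 = max(0, row - radius), min(h, row + radius + 1)
    c0, c1 = max(0, col - radius), min(w, col + radius + 1)
    window = labels[r0:r1, c0:c1].ravel().tolist()
    window.remove(int(labels[row, col]))  # drop one occurrence: the center pixel
    values, counts = np.unique(window, return_counts=True)
    best = counts.argmax()
    return int(values[best]), counts[best] / len(window)


def select_new_training_samples(svm_labels, mrf_labels, min_agreement=0.75, radius=1):
    """Return ((row, col), label) pairs accepted by spatial neighborhood voting.
    min_agreement is an assumed threshold controlling label-noise tolerance."""
    accepted = []
    for row, col in zip(*np.nonzero(borderline_candidates(svm_labels, mrf_labels))):
        label, fraction = neighborhood_vote(mrf_labels, row, col, radius)
        if fraction >= min_agreement:  # enough neighbors agree -> likely low label noise
            accepted.append(((int(row), int(col)), label))
    return accepted
```

The accepted pairs would then be appended to the original training set before retraining the classifiers; the voting threshold trades off how many new samples are added against the risk of introducing label noise.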
