Abstract

Multiple support vector machines (SVMs) with random subspaces [1]-[5] have performed excellently in hyperspectral image classification, reducing the correlation between features and avoiding the Hughes phenomenon. In most random subspace methods, features are randomly selected without replacement from the original feature set according to a uniform distribution [6]. However, an SVM with a Gaussian radial basis function (RBF) kernel is, in general, a nonlinear classifier [7]-[8]. This means that if a feature subset yields the largest nonlinear separability under the RBF kernel, the corresponding SVM can achieve better classification performance. Hence, in this study, feature subsets are randomly selected without replacement according to a kernel (nonlinear) feature importance [9], determined by the largest nonlinear between-class separability and the smallest nonlinear within-class separability with respect to the RBF kernel. Experimental results show that the proposed method improves classification performance using only a few features. In addition, the classification accuracy is higher than that obtained from feature subsets chosen in descending order of feature importance.
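The importance-weighted subspace sampling described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the per-feature score here is a hypothetical ratio of mean within-class to mean between-class RBF similarity computed feature-by-feature, and all function names and parameters are assumptions for the sketch.

```python
import numpy as np
from sklearn.svm import SVC

def kernel_feature_importance(X, y, gamma=1.0):
    # Hypothetical per-feature score: mean RBF (Gaussian) similarity within
    # classes divided by mean RBF similarity between classes, computed on
    # each single feature. Higher when a feature keeps same-class samples
    # close and different-class samples far apart under the RBF kernel.
    scores = []
    same = (y[:, None] == y[None, :])
    for j in range(X.shape[1]):
        xj = X[:, [j]]
        K = np.exp(-gamma * (xj - xj.T) ** 2)  # single-feature RBF kernel
        within = K[same].mean()                # nonlinear within-class similarity
        between = K[~same].mean()              # nonlinear between-class similarity
        scores.append(within / (between + 1e-12))
    return np.asarray(scores)

def random_subspace_svm_ensemble(X, y, n_estimators=10, subspace_size=2,
                                 gamma=1.0, seed=0):
    # Sample feature subsets without replacement, with selection probability
    # proportional to the kernel feature importance, and fit one RBF-SVM
    # per subset.
    rng = np.random.default_rng(seed)
    w = kernel_feature_importance(X, y, gamma)
    p = w / w.sum()
    models = []
    for _ in range(n_estimators):
        feats = rng.choice(X.shape[1], size=subspace_size, replace=False, p=p)
        clf = SVC(kernel="rbf", gamma=gamma).fit(X[:, feats], y)
        models.append((feats, clf))
    return models

def predict_ensemble(models, X):
    # Majority vote across the subspace SVMs.
    votes = np.stack([clf.predict(X[:, feats]) for feats, clf in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

Compared with uniform sampling, features with higher nonlinear separability are drawn more often, so even small subsets tend to contain discriminative bands.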
