Abstract

Nonlinear feature extraction algorithms based on the so-called kernel trick have recently emerged to overcome the limitations of linear feature extraction methods with respect to class discrimination. This study presents a new kernel function that integrates discriminative information from class labels and spatial context into the basic radial basis function (RBF). We represent the mutual closeness of samples in terms of their average class membership probability and capture contextual information by means of Markov random field models. By fusing this additional discriminative information into the kernel feature space, the proposed kernel function outperforms the basic RBF kernel, and a more compact set of extracted features achieves equivalent effectiveness. Experiments also demonstrate that exploiting spatial contextual information during feature extraction can be more efficient than exploiting it during the classification stage.
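
The abstract does not specify the exact form of the proposed kernel, so the following is only a minimal sketch of the general idea: an RBF kernel on the spectral features, modulated by the similarity of spatially smoothed class-membership probabilities. The function names, the way the probability term enters (as a multiplicative weighting controlled by a hypothetical parameter `beta`), and the use of neighbourhood-averaged membership vectors as a stand-in for the MRF-based context are all assumptions, not the authors' actual formulation.

```python
import numpy as np


def rbf_kernel(X, Y, gamma=1.0):
    """Basic RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)


def composite_kernel(X, Y, P_X, P_Y, gamma=1.0, beta=0.5):
    """Hypothetical composite kernel combining spectral and label/context cues.

    P_X, P_Y are per-sample class-membership probability vectors, e.g. from a
    preliminary soft classification averaged over each sample's spatial
    neighbourhood (an assumed proxy for the paper's MRF-based context term).
    """
    k_spectral = rbf_kernel(X, Y, gamma=gamma)
    # Probability-based similarity: inner product of membership vectors,
    # large when two samples are likely to belong to the same class.
    k_context = P_X @ P_Y.T
    # Weighted product/sum of positive semidefinite kernels remains a valid
    # kernel; beta trades off spectral similarity against contextual agreement.
    return k_spectral * (beta + (1.0 - beta) * k_context)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 4))             # 5 samples, 4 spectral features
    P = rng.dirichlet(np.ones(3), size=5)   # soft memberships over 3 classes
    K = composite_kernel(X, X, P, P, gamma=0.5, beta=0.5)
    print(K.shape)                          # (5, 5) kernel matrix
```

In this sketch the kernel matrix K could be fed to any kernel-based feature extractor (e.g. kernel PCA or kernel discriminant analysis) in place of the plain RBF kernel matrix.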
