Abstract

This study combines two different learning paradigms: the k-nearest neighbor (k-NN) rule, a memory-based learning paradigm, and the relevance vector machine (RVM), a statistical learning paradigm. The purpose is to improve the performance of the k-NN rule through the selection of important features with a sparse Bayesian learning method. The combination is performed in kernel space and is called the k-relevance vector (k-RV). The proposed model effectively prunes irrelevant features. Combining k-NN and RVM yields a new concept of similarity measurement for the k-NN rule, which we call k-relevancy: it accounts for "relevancy" in the feature space in addition to "nearness" in the input space. We also introduce a new parameter that controls early stopping of the RVM iterations and is able to improve classification accuracy. Extensive experiments are conducted on several classification datasets from the University of California Irvine (UCI) repository and on two real datasets from the computer vision domain. The performance of k-RV is highly competitive with several state-of-the-art methods in terms of classification accuracy.
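To make the k-relevancy idea concrete, the following is a minimal Python sketch of one plausible reading of it: per-feature relevance weights (assumed here to come from a sparse Bayesian, ARD/RVM-style fit) prune irrelevant features and reweight the k-NN distance. The function name, the pruning threshold, and the hand-set relevance vector are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def k_relevancy_predict(X_train, y_train, x_query, relevance, k=5, prune_tol=1e-3):
    """Classify x_query by k-NN with relevance-weighted distances.

    relevance : per-feature weights, assumed to be the output of a
                sparse Bayesian (ARD/RVM-style) fit; near-zero entries
                are pruned, mimicking the feature pruning in k-RV.
    """
    keep = relevance > prune_tol               # prune irrelevant features
    w = relevance[keep]
    diffs = X_train[:, keep] - x_query[keep]   # broadcast query against training set
    d2 = (diffs ** 2 * w).sum(axis=1)          # relevance-weighted squared distance
    nn = np.argsort(d2)[:k]                    # indices of the k nearest neighbors
    labels, counts = np.unique(y_train[nn], return_counts=True)
    return labels[np.argmax(counts)]           # majority vote among neighbors

# Toy usage: two informative features plus one noise feature whose
# (assumed) relevance weight is near zero and therefore pruned.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
relevance = np.array([1.0, 1.0, 1e-6])         # stand-in for an RVM/ARD fit
print(k_relevancy_predict(X, y, X[0], relevance, k=5))

Under this reading, "nearness" is the ordinary distance in input space, while "relevancy" enters through the learned weights that rescale (or remove) each feature before neighbors are ranked.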
