Abstract

Recent work has shown that combining several classifiers is an effective way to improve classification accuracy. Many ensemble approaches, such as bagging and boosting, have reduced the generalization error of various classifiers; however, these methods have not improved the performance of the Nearest Neighbor (NN) classifier. In this paper, a novel weighted ensemble technique (WNNE) is presented for improving the performance of the NN classifier. WNNE is a combination of several NN classifiers, each trained on a different subset of the input feature set. The algorithm assigns a weight to each classifier and uses a weighted voting mechanism among these classifiers to determine the output of the ensemble. We evaluated the proposed method on several datasets from the UCI Repository and compared it with the NN classifier and the Random Subspace Method (RSM). The results show that our method outperforms both approaches.
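
The abstract does not give the algorithm's full details, but the scheme it describes (several NN classifiers on different feature subsets, combined by weighted voting) can be sketched as below. This is a minimal illustration, not the authors' implementation: the feature subsets are drawn at random here, and each member's weight is assumed to be its accuracy on a held-out validation set, since the abstract does not specify the weighting rule. The names fit_wnne and predict_wnne are hypothetical.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fit_wnne(X_train, y_train, X_val, y_val, n_members=10, subset_frac=0.5):
    """Fit an ensemble of 1-NN classifiers, each on a random feature subset.

    Each member's weight is its accuracy on a held-out validation set
    (an assumed weighting scheme; the paper's exact rule may differ).
    """
    n_features = X_train.shape[1]
    k = max(1, int(subset_frac * n_features))
    members = []
    for _ in range(n_members):
        feats = rng.choice(n_features, size=k, replace=False)
        clf = KNeighborsClassifier(n_neighbors=1).fit(X_train[:, feats], y_train)
        weight = clf.score(X_val[:, feats], y_val)
        members.append((feats, clf, weight))
    return members

def predict_wnne(members, X, classes):
    """Weighted vote: each member adds its weight to its predicted class."""
    votes = np.zeros((X.shape[0], len(classes)))
    for feats, clf, weight in members:
        preds = clf.predict(X[:, feats])
        for i, p in enumerate(preds):
            votes[i, np.searchsorted(classes, p)] += weight
    return classes[np.argmax(votes, axis=1)]

# Small demonstration on a UCI-style dataset (Iris).
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X_tr, y_tr, test_size=0.3, random_state=0)

members = fit_wnne(X_tr, y_tr, X_va, y_va)
preds = predict_wnne(members, X_te, np.unique(y))
print("ensemble accuracy:", np.mean(preds == y_te))
```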
