Abstract

It has previously been proved by Bailey and Jain that the asymptotic classification error rate of the (unweighted) k-nearest neighbor (k-NN) rule is lower than that of any weighted k-NN rule. Equations are developed for the classification error rate of a test sample when the number of training samples is finite, and it is argued intuitively that a weighted rule may then, in some cases, achieve a lower error rate than the unweighted rule. This conclusion is confirmed by analytically solving a particular simple problem and, as an illustration, by presenting experimental results obtained using a generalized form of the weighting function proposed by Dudani.
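
The generalized weighting function studied in the paper is not given in the abstract; as a minimal, hypothetical sketch of the baseline scheme it extends, the snippet below implements Dudani's original distance weights, w_i = (d_k - d_i)/(d_k - d_1) (with w_i = 1 when d_k = d_1), applied to the i-th of the k sorted neighbor distances. The function names and the choice of Euclidean distance are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def dudani_weights(sorted_dists):
        # Dudani (1976) weights: the nearest neighbor gets weight 1,
        # the k-th nearest gets weight 0:
        #   w_i = (d_k - d_i) / (d_k - d_1); all weights are 1 if d_k == d_1.
        d1, dk = sorted_dists[0], sorted_dists[-1]
        if dk == d1:
            return np.ones_like(sorted_dists, dtype=float)
        return (dk - sorted_dists) / (dk - d1)

    def weighted_knn_classify(X_train, y_train, x, k):
        # Euclidean distances from the test sample x to every training sample.
        dists = np.linalg.norm(X_train - x, axis=1)
        # Indices of the k nearest neighbors, in increasing order of distance.
        idx = np.argsort(dists)[:k]
        w = dudani_weights(dists[idx])
        # Sum the weights per class; the class with the largest total wins.
        votes = {}
        for label, weight in zip(y_train[idx], w):
            votes[label] = votes.get(label, 0.0) + weight
        return max(votes, key=votes.get)

For example, with X_train an (n, d) array, y_train a length-n label array, and a test point x, weighted_knn_classify(X_train, y_train, x, 5) returns the weighted-majority label; replacing the weights with all ones recovers the unweighted k-NN rule whose asymptotic optimality Bailey and Jain proved.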
