Abstract

The number of nearest neighbors K and the choice of distance measure considerably affect the performance of the K-nearest neighbor (K-NN) algorithm. Fixing K for every test sample without any prior knowledge can make the information provided by neighbors misleading and produce incorrect classification results. High-dimensional data violate the assumption of locally constant class-conditional probabilities on which K-NN relies and expose the algorithm to the curse of dimensionality. Moreover, with imperfect data, training samples may be corrupted by noise or lie in heavily overlapping regions, further impairing the effectiveness of the K-NN rule. In contrast to existing research, which typically addresses only one of these problems, this paper presents an adaptive evidential K-NN classification (AEK-NN) algorithm that couples neighborhood search with feature weighting. To jointly learn an adaptive neighborhood for each sample and per-feature weights, AEK-NN maps the Euclidean space into a reconstructed space of similarity coefficients through a two-stage training process. Within the framework of evidence theory, AEK-NN produces classification results for data with imperfect labels. Ablation studies demonstrate the benefits of applying evidence theory, adaptive neighborhood search, and feature weighting, and experiments on simulated and real-world datasets show that AEK-NN achieves state-of-the-art performance.
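
To make the evidential side of the method concrete, the following is a minimal sketch of a classical Denoeux-style evidential K-NN rule with per-feature distance weights, written in Python with NumPy. It is illustrative only: the function name, the alpha and gamma parameters, and the weight vector w are assumptions, and the sketch does not reproduce AEK-NN's adaptive per-sample neighborhood learning or its two-stage training.

    import numpy as np

    def evidential_knn_predict(X_train, y_train, x, k=5, w=None,
                               alpha=0.95, gamma=1.0):
        """Denoeux-style evidential K-NN for a single query point x.

        Each of the k nearest neighbors contributes a simple mass
        function: alpha * exp(-gamma * d^2) on its own class and the
        remainder on the whole frame (ignorance). The k mass functions
        are combined conjunctively and normalized at the end, which is
        equivalent to Dempster's rule. The optional vector w holds
        hypothetical per-feature weights for a weighted Euclidean
        distance.
        """
        classes = np.unique(y_train)
        if w is None:
            w = np.ones(X_train.shape[1])       # uniform feature weights

        # Weighted squared Euclidean distance to every training sample.
        d2 = ((X_train - x) ** 2 * w).sum(axis=1)
        nn = np.argsort(d2)[:k]                 # k nearest neighbors

        # Running masses: one per singleton class, plus the frame Omega.
        m = {c: 0.0 for c in classes}
        m_omega = 1.0
        for i in nn:
            s = alpha * np.exp(-gamma * d2[i])  # support for y_train[i]
            ci = y_train[i]
            # Unnormalized conjunctive combination with the simple mass
            # function (s on {ci}, 1-s on Omega); conflict mass is
            # dropped and handled by the final normalization.
            new_ci = m[ci] + s * m_omega
            for c in classes:
                m[c] *= (1.0 - s)
            m[ci] = new_ci
            m_omega *= (1.0 - s)

        # Normalize the surviving masses (Dempster normalization).
        total = sum(m.values()) + m_omega
        masses = {c: m[c] / total for c in classes}
        ignorance = m_omega / total
        return max(masses, key=masses.get), masses, ignorance

    # Toy usage with synthetic two-class data (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    label, masses, ignorance = evidential_knn_predict(X, y, np.array([0.3, 0.4]), k=7)
    print(label, masses, ignorance)

A useful property of this rule, and one reason evidence theory suits imperfect data, is that the residual mass on the whole frame quantifies how ignorant the classifier remains after seeing the neighbors: distant or conflicting neighbors leave more mass on Omega rather than forcing a confident label.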
