Abstract

k-nearest neighbors (k-NN), known as a simple and efficient approach, is a non-parametric supervised classifier. It determines the class label of an unknown sample from its k nearest neighbors in a training set, where the neighbors are found with a distance function. Although k-NN produces successful results, several extensions have been proposed to improve its precision. The neutrosophic set (NS) defines three memberships, namely T, I, and F, which denote the truth, indeterminacy, and falsity membership degrees, respectively. In this paper, the NS memberships are adopted to improve the classification performance of the k-NN classifier. A new, straightforward k-NN approach based on NS theory is proposed. It calculates the NS memberships with a supervised neutrosophic c-means (NCM) algorithm, and a final belonging membership U is computed from the NS triples as U = T + I − F. A voting scheme similar to that of fuzzy k-NN is used for class label determination. Extensive experiments are conducted to evaluate the proposed method's performance on several toy and real-world datasets, and the method is compared with k-NN, fuzzy k-NN, and two weighted k-NN schemes. The results are encouraging and the improvement is clear.
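The voting step described above can be sketched in a few lines. The sketch below is a minimal, hypothetical illustration (not the paper's implementation): it assumes per-sample neutrosophic memberships T, I, and F are already available (e.g. from a supervised NCM step), combines them as U = T + I − F, and weights each neighbor's vote by U and by an inverse-distance factor in the style of fuzzy k-NN (with fuzzifier m):

```python
import numpy as np

def neutrosophic_knn_predict(X_train, T, I, F, y_train, x, k=5, m=2):
    """Hypothetical sketch of an NS-weighted k-NN vote.

    T, I, F: per-training-sample neutrosophic memberships (assumed given,
    e.g. produced by a supervised NCM step as in the paper).
    """
    U = T + I - F                                  # final belonging membership
    dists = np.linalg.norm(X_train - x, axis=1)    # Euclidean distances to x
    nn = np.argsort(dists)[:k]                     # indices of k nearest neighbors
    # Inverse-distance weights with fuzzy-k-NN-style exponent 2/(m-1);
    # small epsilon avoids division by zero for exact matches.
    w = 1.0 / (dists[nn] ** (2.0 / (m - 1)) + 1e-12)
    classes = np.unique(y_train)
    # Each neighbor contributes its weight times its belonging degree U
    scores = np.array([np.sum(w * U[nn] * (y_train[nn] == c)) for c in classes])
    return classes[np.argmax(scores)]
```

The class with the largest accumulated, U-weighted score wins, so samples with low belonging membership (high F, low T) contribute less to the decision.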

Highlights

  • The k-nearest neighbors (k-NN), known to be the oldest and simplest approach, is a non-parametric supervised classifier [1,2]. It aims to determine the class label of an unknown sample by its k-nearest neighbors that are stored in a training set.

  • We adopted the neutrosophic set (NS) memberships to improve the classification performance of the k-NN classifier.

  • The paper is organized as follows: we briefly review the theories of k-NN.



Introduction

The k-nearest neighbors (k-NN), known to be the oldest and simplest approach, is a non-parametric supervised classifier [1,2]. It aims to determine the class label of an unknown sample by its k-nearest neighbors that are stored in a training set. For fuzzy k-NN, three different methods were proposed for assigning fuzzy memberships to the labeled samples. The membership assignment of the conventional fuzzy k-NN algorithm has the disadvantage that it depends on the choice of a distance function. To alleviate this drawback, Pham et al. [9] proposed an optimally-weighted fuzzy k-NN approach, introducing a computational scheme for determining optimal weights that improve the efficiency of the fuzzy k-NN approach. In an evidential approach, the final class label assignment was handled by Dempster's rule of combination. Another evidential-theory-based k-NN approach, denoted Ek-NN, was proposed by Zouhal et al. [11].
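The baseline classifier that all of these extensions build on can be sketched as follows. This is a minimal, illustrative implementation of plain k-NN with Euclidean distance and majority voting, not code from the paper:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    # Euclidean distance from x to every training sample
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest samples
    nn = np.argsort(dists)[:k]
    # Majority vote over the neighbors' labels
    labels, counts = np.unique(y_train[nn], return_counts=True)
    return labels[np.argmax(counts)]
```

Because every neighbor votes with equal weight regardless of its distance or how typical it is of its class, schemes such as fuzzy k-NN and the weighted variants above replace this flat vote with graded memberships.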

Related works
Proposed Neutrosophic-k-NN Classifier
Experimental Works
Method
Findings
Conclusions
