Abstract

Naïve Bayes is a simple probabilistic prediction method based on the application of Bayes' theorem (Bayes' rule) together with a strong (naïve) assumption of independence between features. K-Nearest Neighbor (K-NN) is an instance-based learning method; it is also a lazy learning technique that searches for the group of k objects in the training data that are closest (most similar) to an object in the new or testing data. Classification is a data mining technique for building a model from a predetermined data set, and data mining techniques are a suitable choice for solving this problem. Comparing the results of the two classification algorithms leads to identifying the better and more efficient algorithm for future use. It is recommended to use different datasets when comparing the Naïve Bayes and K-NN algorithms. The writer formulates the problem so that the research becomes more directed: the problem addressed in this study is to determine the accuracy of the Naïve Bayes and K-NN algorithms in classifying data.
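The comparison described above can be sketched in code. The following is a minimal, self-contained illustration of the two classifiers on a toy two-feature dataset; the dataset, feature count, and k value are assumptions for illustration only and are not taken from the study.

```python
import math
from collections import Counter, defaultdict

# Toy labeled dataset (hypothetical; the study's actual data is not specified)
train = [((1.0, 1.1), "A"), ((1.2, 0.9), "A"), ((0.8, 1.0), "A"),
         ((3.0, 3.2), "B"), ((3.1, 2.9), "B"), ((2.9, 3.0), "B")]
test = [((1.1, 1.0), "A"), ((3.0, 3.1), "B")]

def knn_predict(x, k=3):
    """K-NN: majority vote among the k training points nearest to x."""
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def nb_fit():
    """Gaussian Naive Bayes: per-class mean, variance, and prior."""
    by_class = defaultdict(list)
    for x, y in train:
        by_class[y].append(x)
    stats = {}
    for y, xs in by_class.items():
        n = len(xs)
        means = [sum(f) / n for f in zip(*xs)]
        vars_ = [max(sum((v - m) ** 2 for v in f) / n, 1e-9)
                 for f, m in zip(zip(*xs), means)]
        stats[y] = (means, vars_, n / len(train))
    return stats

def nb_predict(x, stats):
    """Pick the class maximizing log prior + sum of log Gaussian likelihoods."""
    best, best_lp = None, -math.inf
    for y, (means, vars_, prior) in stats.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Accuracy on the test set, the comparison metric named in the abstract
stats = nb_fit()
knn_acc = sum(knn_predict(x) == y for x, y in test) / len(test)
nb_acc = sum(nb_predict(x, stats) == y for x, y in test) / len(test)
print("K-NN accuracy:", knn_acc)
print("Naive Bayes accuracy:", nb_acc)
```

In practice such a comparison would use a real dataset and a train/test split (or cross-validation), but the accuracy-per-algorithm structure is the same.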
