Abstract

The study investigates three classification algorithms, namely K-Nearest Neighbor (K-NN), Naïve Bayes, and Decision Tree, for the classification of Diabetes Mellitus using a dataset from Kaggle. K-NN relies on distance calculations between test and training data, computed with the Euclidean distance formula; the choice of k, the number of nearest neighbors considered, significantly influences its effectiveness. Naïve Bayes, a probabilistic method, predicts class probabilities from prior observations and uses a Gaussian distribution to handle continuous attributes. Decision Trees form prediction models from easily implementable rules. Data collection involves obtaining a Diabetes Mellitus dataset with eight attributes, and preprocessing includes cleaning and normalization to minimize inconsistencies and incomplete data. The classification algorithms are applied using the RapidMiner tool, and the results are compared for accuracy: Naïve Bayes yields 77.34% accuracy, K-NN's performance depends on the chosen k value, and Decision Trees generate rules for classification. The study provides insights into the strengths and weaknesses of each algorithm for diabetes classification.
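The K-NN procedure described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the paper's RapidMiner workflow: the toy feature vectors (e.g. glucose and BMI values) and labels below are invented for demonstration, and the function names are our own.

```python
import math
from collections import Counter

def euclidean(a, b):
    # Euclidean distance between two feature vectors, as used by K-NN.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train_X, train_y, query, k=3):
    # Rank training points by distance to the query point, then
    # take a majority vote among the labels of the k nearest.
    ranked = sorted(zip(train_X, train_y), key=lambda p: euclidean(p[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical data: two attributes per patient, label 1 = diabetic.
train_X = [(85, 22.0), (90, 24.5), (160, 31.0), (170, 35.2), (95, 23.1), (155, 33.0)]
train_y = [0, 0, 1, 1, 0, 1]

print(knn_predict(train_X, train_y, (150, 30.0), k=3))  # → 1
```

As the abstract notes, the result is sensitive to k: with a very large k the vote is dominated by the majority class regardless of distance, which is why preprocessing steps such as normalization matter before the distances are computed.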
