Abstract

Classification is one of the most important techniques in data mining. Today, data is available in abundance from numerous sources, but extracting meaningful information from it is a tedious task. Using machine learning algorithms to train classifiers that decode this information from the data is an analysis approach that has gained much popularity in recent years. This paper evaluates the performance of Naive Bayes, Logistic Regression, Decision Tree and Random Forest on the Pima Indian Diabetes dataset from the UCI Repository. The Naive Bayes algorithm is based on likelihood and probability; it is fast and stable to changes in the data. Logistic Regression models the relationship of each feature to the outcome and weights the features according to their impact on the result. Random Forest is an ensemble algorithm that fits multiple trees on subsets of the data and averages their results to improve performance and control over-fitting. A Decision Tree can be nicely visualized; it uses a binary tree structure in which each node makes a decision based on the value of a feature. The paper concludes with a comparative evaluation of Naive Bayes, Logistic Regression, Decision Tree and Random Forest on the Pima Indian Diabetes dataset (taken from the UCI repository) for predicting diabetic patients.

Keywords: Naive Bayes, Logistic Regression, Random Forest, Classification, Decision tree
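
As a rough illustration of the kind of comparison the abstract describes, the following sketch trains the four classifiers on the Pima Indian Diabetes dataset with scikit-learn and reports test accuracy. The file name pima-indians-diabetes.csv, the "Outcome" column name, and the 75/25 train/test split are assumptions for illustration only; the paper's exact preprocessing and evaluation protocol are not given here.

# Hedged sketch: compare the four classifiers named in the abstract on the
# Pima Indian Diabetes dataset. File name and column layout are assumptions;
# the UCI version has 8 feature columns and a binary outcome label.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Assumed local copy of the dataset (8 features + "Outcome" label).
data = pd.read_csv("pima-indians-diabetes.csv")
X = data.drop(columns=["Outcome"])
y = data["Outcome"]

# Assumed 75/25 stratified split; not a detail taken from the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

models = {
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(f"{name}: accuracy = {accuracy_score(y_test, preds):.3f}")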
