Abstract

The naive Bayes classifier has been applied extensively across domains over the past few decades owing to its simple structure and remarkable predictive performance. However, it rests on a strong assumption that limits its applicability to many real-world problems: the conditional independence of attributes given the class. In this paper, we propose the mixture of latent multinomial naive Bayes (MLMNB) classifier as an extension of naive Bayes that relaxes the independence assumption. MLMNB incorporates a latent variable into a predefined Bayesian network structure to model dependencies among attributes, yet avoids the heavy computational burden of structure-learning approaches. We theoretically prove that MLMNB automatically reduces to the naive Bayes classifier whenever the conditional independence assumption holds. The expectation-maximization (EM) algorithm is adapted for parameter estimation. Experimental results on 36 datasets from the University of California, Irvine (UCI) machine learning repository show that MLMNB achieves substantial predictive performance compared with state-of-the-art modifications of the naive Bayes classifier, in terms of classification accuracy (ACC), conditional log-likelihood (CLL), and area under the ROC curve (AUC).
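The abstract does not spell out MLMNB's exact parameterization, but the core idea of augmenting naive Bayes with a latent multinomial variable and fitting it by EM can be illustrated. The sketch below is a minimal, hypothetical implementation assuming the joint factorizes as P(c) P(z | c) ∏_j P(x_j | c, z) over integer-coded categorical attributes; the function names, this particular factorization, the Laplace smoothing, and the fixed number of latent states are illustrative assumptions, not the authors' MLMNB.

```python
import numpy as np

def fit_latent_nb(X, y, n_latent=2, n_iter=50, alpha=1.0, seed=0):
    """EM for a latent-augmented naive Bayes (illustrative, not the paper's exact model):
    P(x, c) = P(c) * sum_z P(z | c) * prod_j P(x_j | c, z),
    with X an (n, d) array of integer-coded categorical attributes and y class labels 0..C-1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    n_class = int(y.max()) + 1
    n_vals = X.max(axis=0) + 1                               # cardinality of each attribute

    prior = np.bincount(y, minlength=n_class).astype(float)
    prior /= prior.sum()                                     # class prior has a closed form

    pz = rng.dirichlet(np.ones(n_latent), size=n_class)      # P(z | c), shape (C, Z)
    px = [rng.dirichlet(np.ones(v), size=(n_class, n_latent))
          for v in n_vals]                                    # P(x_j | c, z), shapes (C, Z, V_j)

    for _ in range(n_iter):
        # E-step: responsibilities r[i, z] = P(z | x_i, c_i), computed in log space.
        log_r = np.log(pz[y])                                 # (n, Z)
        for j in range(d):
            log_r += np.log(px[j][y, :, X[:, j]])             # (n, Z)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate P(z | c) and P(x_j | c, z) from soft counts with Laplace smoothing.
        for c in range(n_class):
            rc = r[y == c]                                    # (n_c, Z)
            pz[c] = rc.sum(axis=0) + alpha
            pz[c] /= pz[c].sum()
            for j in range(d):
                counts = np.full((n_latent, n_vals[j]), alpha)
                np.add.at(counts.T, X[y == c, j], rc)         # accumulate responsibilities per value
                px[j][c] = counts / counts.sum(axis=1, keepdims=True)
    return prior, pz, px

def predict(X, prior, pz, px):
    """Classify by marginalizing the latent variable: argmax_c P(c) sum_z P(z|c) prod_j P(x_j|c,z)."""
    n, d = X.shape
    n_class, n_latent = pz.shape
    log_joint = np.empty((n, n_class))
    for c in range(n_class):
        log_term = np.broadcast_to(np.log(pz[c]), (n, n_latent)).copy()   # (n, Z)
        for j in range(d):
            log_term += np.log(px[j][c, :, X[:, j]])          # (n, Z)
        m = log_term.max(axis=1)
        log_joint[:, c] = np.log(prior[c]) + m + np.log(np.exp(log_term - m[:, None]).sum(axis=1))
    return log_joint.argmax(axis=1)
```

Marginalizing over the latent variable at prediction time lets the latent states absorb attribute dependencies that plain naive Bayes cannot represent; with a single latent state this sketch collapses to ordinary naive Bayes, which mirrors the shrinkage property stated in the abstract.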
