Abstract

This paper investigates an effective boosting method for naïve Bayesian classifiers. Prior work has shown that boosting reduces the error rate of naïve Bayesian classifiers far less than it does for decision trees (or decision stumps). This phenomenon may be caused by a combination of factors. To address it, the local accuracies of a naïve Bayesian base classifier should replace the global accuracy (or global error rate) used in traditional boosting methods. Based on this analysis, we propose an effective boosted naïve Bayesian method that uses a C4.5 decision tree as the local-accuracy evaluator for each base classifier. At each round, two classifiers are constructed: one is the naïve Bayesian base classifier, and the other is its C4.5 evaluator. The estimated local accuracy plays an important role both in updating the weights of training examples and in determining the vote weights of the base classifiers. Finally, experimental comparison shows that, averaged over a set of domains, our method achieves a much lower error rate than AdaBoost.M1 applied to naïve Bayesian classifiers.

Keywords: Naïve Bayesian classifier; AdaBoost; boosting; local accuracy estimation
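The round structure described above — fit a naïve Bayesian base classifier, fit a decision tree that estimates where that classifier is locally accurate, then use the estimated local accuracy both to reweight examples and to weight votes — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the weight-update rule `exp(1 - acc)`, and the use of scikit-learn's `GaussianNB` and `DecisionTreeClassifier` (as a stand-in for C4.5) are all assumptions.

```python
# Hedged sketch of boosting naive Bayes with a per-example local-accuracy
# evaluator, loosely following the abstract. Names and update rules are
# illustrative; the paper's exact formulas are not given here.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier


class LocalAccuracyBoost:
    def __init__(self, n_rounds=10):
        self.n_rounds = n_rounds
        self.bases, self.evaluators = [], []

    def _local_acc(self, ev, X):
        # Evaluator's estimated probability that the base classifier is
        # correct at each point (handles the degenerate one-class case).
        if len(ev.classes_) == 2:
            return ev.predict_proba(X)[:, list(ev.classes_).index(1)]
        return np.full(len(X), float(ev.classes_[0]))

    def fit(self, X, y):
        n = len(X)
        w = np.full(n, 1.0 / n)                      # example weights
        for _ in range(self.n_rounds):
            nb = GaussianNB().fit(X, y, sample_weight=w)
            correct = (nb.predict(X) == y).astype(int)
            # Tree (C4.5 stand-in) learns where the base classifier is reliable.
            ev = DecisionTreeClassifier(max_depth=3).fit(
                X, correct, sample_weight=w)
            acc = self._local_acc(ev, X)
            # Up-weight examples in regions of low estimated local accuracy
            # (assumed update rule, for illustration only).
            w *= np.exp(1.0 - acc)
            w /= w.sum()
            self.bases.append(nb)
            self.evaluators.append(ev)
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        # Each base classifier votes with its evaluator's estimated
        # local accuracy at the query point as the vote weight.
        votes = np.zeros((len(X), len(self.classes_)))
        for nb, ev in zip(self.bases, self.evaluators):
            pred = nb.predict(X)
            acc = self._local_acc(ev, X)
            for i, p in enumerate(pred):
                votes[i, np.searchsorted(self.classes_, p)] += acc[i]
        return self.classes_[votes.argmax(axis=1)]
```

The key departure from AdaBoost.M1 is visible in `predict`: the vote weight varies with the query point via the evaluator, rather than being a single global coefficient per round.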
