Recent studies have shown that Bayesian network classifiers (BNCs) are powerful tools for knowledge representation and classification, and averaged one-dependence estimators (AODE) is one of the most popular and effective BNCs, since its independence assumptions and ensemble learning strategy let it trade off bias against variance. However, unverified independence assumptions may bias the estimated probability distribution and thereby degrade classification performance. In this paper, we theoretically prove the uncertainty of probability-theoretic independence and propose to measure the independence between the attribute values that appear in a specific instance. The conditional probability estimates can then be finely tuned on the basis of this point-wise independence analysis. The point-wise log-likelihood function is further applied as a weighting metric for the committee members of AODE to improve the estimate of the joint probability. Extensive experiments on 36 benchmark datasets show that, compared to other state-of-the-art classifiers, weighted one-dependence estimators using point-wise independence analysis achieve competitive classification performance in terms of zero-one loss, RMSE, bias-variance decomposition, and conditional log-likelihood.
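The abstract only sketches the method, so the following Python sketch is an illustration of the general idea rather than the paper's procedure: each one-dependence estimator (superparent model) scores the instance, and its committee weight is derived from the point-wise log-likelihood it assigns to that instance. The helper names (`fit_counts`, `predict_weighted_aode`), the Laplace smoothing, and the softmax-style weighting are all assumptions made for this sketch; the paper's exact independence measure and probability tuning are not reproduced here.

```python
import numpy as np

def fit_counts(X, y, n_classes, n_values):
    """Collect the frequency tables AODE needs from discretized data:
    class counts, per-class attribute-value counts, and per-class
    pairwise attribute-value counts."""
    n, d = X.shape
    cls = np.zeros(n_classes)
    xy = np.zeros((d, n_values, n_classes))                # count(x_i = v, y = c)
    xxy = np.zeros((d, n_values, d, n_values, n_classes))  # count(x_i = v, x_j = u, y = c)
    for row, c in zip(X, y):
        cls[c] += 1
        for i in range(d):
            xy[i, row[i], c] += 1
            for j in range(d):
                xxy[i, row[i], j, row[j], c] += 1
    return cls, xy, xxy

def predict_weighted_aode(x, cls, xy, xxy, alpha=1.0):
    """Score each class with a weighted average of one-dependence estimators.
    Superparent model i contributes P(y, x_i) * prod_{j != i} P(x_j | y, x_i);
    its weight comes from the point-wise log-likelihood it assigns to the
    instance x (a hypothetical weighting chosen for this illustration)."""
    d, n_values, n_classes = xy.shape
    n_total = cls.sum()
    log_joint = np.zeros((d, n_classes))
    for i in range(d):
        # P(y, x_i) with Laplace smoothing
        p_yxi = (xy[i, x[i]] + alpha) / (n_total + alpha * n_classes * n_values)
        log_joint[i] = np.log(p_yxi)
        for j in range(d):
            if j == i:
                continue
            # P(x_j | y, x_i) with Laplace smoothing
            p = (xxy[i, x[i], j, x[j]] + alpha) / (xy[i, x[i]] + alpha * n_values)
            log_joint[i] += np.log(p)
    # point-wise log-likelihood of x under model i: log sum_c P_i(x, c)
    pll = np.logaddexp.reduce(log_joint, axis=1)
    w = np.exp(pll - pll.max())
    w /= w.sum()
    # weighted mixture of the one-dependence estimators, normalized over classes
    scores = (w[:, None] * np.exp(log_joint)).sum(axis=0)
    return scores / scores.sum()

# Tiny synthetic usage example with binary attributes and two classes.
X = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1]])
y = np.array([0, 0, 1, 1])
cls, xy, xxy = fit_counts(X, y, n_classes=2, n_values=2)
print(predict_weighted_aode(np.array([0, 1, 1]), cls, xy, xxy))
```

Setting all weights equal recovers plain AODE; the instance-specific weights are what let the ensemble lean on the committee members whose one-dependence assumption best fits the particular instance.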