Abstract

Since naive Bayes (NB) first demonstrated that excellent classification performance can be achieved with minimal computational overhead, researchers have increasingly focused their attention on Bayesian network classifiers (BNCs). Among the numerous approaches to refining NB, averaged one-dependence estimators (AODE) achieves excellent classification performance, although the independence assumption made by each of its members rarely holds in practice. As data quantities continue to grow, a robust variant of AODE with high expressivity and low bias is urgently needed. In this paper, the log-likelihood function $LL({\mathscr{B}}|D)$ is introduced to measure the number of bits needed by the network topology ${\mathscr{B}}$ to describe the training data $D$. An efficient heuristic search strategy is applied to maximize $LL({\mathscr{B}}|D)$ and relax the independence assumption of AODE by exploring higher-order conditional dependencies between attributes. The proposed approach, averaged tree-augmented one-dependence estimators (ATODE), inherits the effectiveness of AODE and gains more flexibility for modelling higher-order dependencies. Extensive experimental comparisons on 36 datasets demonstrate that our proposed out-of-core learner achieves competitive or better classification performance than state-of-the-art learners, including single-model BNCs (e.g., CFWNB and SKDB) and variants of AODE (e.g., TAODE).
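The abstract does not reproduce the definition of the objective; as a point of reference, for a Bayesian network classifier the log-likelihood of a structure $\mathscr{B}$ on training data $D=\{(\mathbf{x}^{(1)},y^{(1)}),\dots,(\mathbf{x}^{(N)},y^{(N)})\}$ conventionally decomposes over the local conditional distributions (the notation $\Pi_i$ for the parent set of attribute $X_i$ is standard usage in the BNC literature, not notation taken from this paper):

$$LL(\mathscr{B}\mid D)=\sum_{t=1}^{N}\log P_{\mathscr{B}}\big(\mathbf{x}^{(t)},y^{(t)}\big)=\sum_{t=1}^{N}\Big[\log P\big(y^{(t)}\big)+\sum_{i=1}^{n}\log P\big(x_i^{(t)}\mid \Pi_i^{(t)}\big)\Big],$$

where $\Pi_i^{(t)}$ denotes the values taken in instance $t$ by the parents of $X_i$ in $\mathscr{B}$ (in AODE- and TAN-style members these parents include the class and at most one or a few attribute parents). Under the minimum-description-length view, $-LL(\mathscr{B}\mid D)$ is the number of bits needed to encode $D$ given $\mathscr{B}$, so maximizing $LL(\mathscr{B}\mid D)$ over tree-augmented structures is what allows the heuristic search to capture higher-order dependencies that AODE's one-dependence assumption excludes.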
