Abstract

For learning Bayesian network (BN) structures, it has become common practice to use the Bayesian Dirichlet (BD) scoring criterion. In contrast to most other scoring metrics, which can functionally be interpreted as regularized maximum likelihood criteria, the BD metric cannot be viewed in this way. This functional dissimilarity of the BD metric compared to other metrics is an obstacle from an analytical point of view; it becomes clear, for instance, in the context of the structural EM algorithm for learning BNs from incomplete data. It is also not easy to pinpoint why exactly, and to what extent, regularization is taken care of by the BD metric. We introduce a Bayesian scoring criterion that is closely related to the BD metric but resolves these disadvantages. We arrive at this result by using the same basic assumptions as for the BD metric; however, whereas the BD metric focuses on learning the model structure only, we aim at learning the most probable BN jointly, i.e., the model structure and the parameters are selected as a pair. This approach yields a scoring metric with the functional form of a regularized maximum likelihood metric. We perform experiments and show that this MAP BN metric also yields better results than the BIC and BD metrics on independent test data.
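
For orientation, a sketch of the contrast in standard textbook notation (not necessarily the notation used in the paper): the BD score is the marginal likelihood of the data under independent Dirichlet priors on the parameters,

\[ \mathrm{BD}(G, D) \;=\; P(G) \prod_{i=1}^{n} \prod_{j=1}^{q_i} \frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})} \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})}, \]

where \(N_{ijk}\) counts the cases in \(D\) with variable \(X_i\) in state \(k\) and its parents in configuration \(j\), \(N_{ij} = \sum_k N_{ijk}\), and \(\alpha_{ijk}\) are Dirichlet hyperparameters with \(\alpha_{ij} = \sum_k \alpha_{ijk}\). A regularized maximum likelihood criterion such as BIC instead has the additive penalized form

\[ \mathrm{BIC}(G, D) \;=\; \log P(D \mid \hat{\theta}_G, G) \;-\; \frac{d_G}{2} \log N, \]

with \(\hat{\theta}_G\) the maximum-likelihood parameters and \(d_G\) the number of free parameters of \(G\). The MAP BN metric proposed here shares this penalized log-likelihood structure while retaining the Bayesian assumptions underlying the BD metric.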
