Abstract

In Bayesian network structure learning (BNSL) based on Bayes scores, we must specify two prior probabilities: one over the structures and one over the parameters. In this paper, we mainly consider the parameter priors, in particular the BDeu (Bayesian Dirichlet equivalent uniform) and Jeffreys' prior. In model selection, given examples, we typically consider how well a model explains the examples and how simple the model is, and choose the model that is best with respect to these criteria. In this sense, if a model A is better than another model B with respect to both criteria, it is reasonable to choose A. In this paper, we prove that BDeu violates this regularity, so that we face a fatal situation in BNSL: BDeu tends to add a variable to the current parent set of a variable X even when the empirical conditional entropy has already reached zero. In general, a prior should reflect the learner's belief and should not be rejected from a general point of view. However, this paper shows that the belief underlying BDeu contradicts our intuition in some cases, a fact that was not known before this work.
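
To make the claimed anomaly concrete, the following minimal sketch (not taken from the paper; the toy dataset, the function name bdeu_local_score, and the equivalent sample size alpha = 1 are our own assumptions) computes the standard BDeu local score for binary variables on data in which X = Y = Z holds deterministically. The empirical conditional entropy H(X | Y) is already zero, yet the score prefers the larger parent set {Y, Z}.

```python
import numpy as np
from scipy.special import gammaln

def bdeu_local_score(child, parents, data, alpha=1.0, r=2):
    """BDeu local log score of column `child` given the columns in `parents`.

    Illustration only; assumes every variable takes values in {0, ..., r-1}.
    """
    if parents:
        # Encode each observed parent configuration as a single integer.
        idx = data[:, parents] @ (r ** np.arange(len(parents)))
        q = r ** len(parents)  # number of possible parent configurations
    else:
        idx = np.zeros(len(data), dtype=int)
        q = 1
    score = 0.0
    # Unobserved parent configurations contribute exactly zero to the score,
    # so it suffices to sum over the configurations that occur in the data.
    for j in np.unique(idx):
        xj = data[idx == j, child]
        n_jk = np.bincount(xj, minlength=r)  # counts of each child value
        score += gammaln(alpha / q) - gammaln(alpha / q + n_jk.sum())
        score += np.sum(gammaln(alpha / (q * r) + n_jk)
                        - gammaln(alpha / (q * r)))
    return score

# Hypothetical toy data: 100 samples of (X, Y, Z) with X = Y = Z,
# so the empirical conditional entropy H(X | Y) is zero.
data = np.array([[0, 0, 0]] * 50 + [[1, 1, 1]] * 50)

print(bdeu_local_score(0, [1], data))     # parents {Y}:    about -3.39
print(bdeu_local_score(0, [1, 2], data))  # parents {Y, Z}: about -2.44
```

Under these assumptions, BDeu assigns a higher score to {Y, Z} than to {Y} even though Z adds nothing once Y is known, which is exactly the irregularity described above; a fixed Dirichlet prior such as Jeffreys' does not reward the redundant parent in this way, since its hyperparameters do not shrink with the number of parent configurations.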
