Abstract

Graphical Markov models, and above all Bayesian networks, have become a very popular tool for representing and processing multidimensional probability distributions. The technique that makes computation with probability distributions of several hundred dimensions possible was suggested by Lauritzen and Spiegelhalter. However, to employ it one has to transform a Bayesian network into a decomposable model. This is because decomposable models (or more precisely their building blocks, i.e., their low-dimensional marginals) can be reordered in many ways, so that any variable can be placed at the beginning of the model. It is not difficult to show that a much wider class of models possesses this property. In compositional models theory we call these models flexible. It is the widest class of models that can always be restructured in such a way that any variable appears at the beginning of the model. But until recently it had been an open problem whether this class of models is closed under conditioning, i.e., whether a conditional of a flexible model is again flexible. In this paper we show that this property holds true, which confirms the importance of flexible models for practical applications.
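To make the reordering property concrete, the following sketch (not taken from the paper; variable names, the toy distribution, and the dict-based representation are all illustrative assumptions) implements the operator of composition used in compositional models, (f ▷ g) = f · g / g↓(K∩L), and checks on a small decomposable model over three binary variables that its two consistent marginals can be composed in either order and still reproduce the same joint:

```python
from itertools import product

# Distributions are dicts mapping assignments (tuples of 0/1, ordered by a
# variable list) to probabilities.  All names here are illustrative.

def marginal(dist, vars_, sub):
    """Marginalize `dist` (defined over `vars_`) onto the subset `sub`."""
    idx = [vars_.index(v) for v in sub]
    out = {}
    for point, p in dist.items():
        key = tuple(point[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def compose(f, fv, g, gv):
    """Operator of composition: (f |> g) = f * g / g marginalized to fv∩gv."""
    common = [v for v in gv if v in fv]
    g_marg = marginal(g, gv, common)
    all_vars = fv + [v for v in gv if v not in fv]
    out = {}
    for point in product([0, 1], repeat=len(all_vars)):
        asg = dict(zip(all_vars, point))
        fkey = tuple(asg[v] for v in fv)
        gkey = tuple(asg[v] for v in gv)
        ckey = tuple(asg[v] for v in common)
        denom = g_marg.get(ckey, 0.0)
        out[point] = f[fkey] * g[gkey] / denom if denom > 0 else 0.0
    return out, all_vars

def reorder(dist, vars_, target):
    """Re-key a distribution to a different variable ordering."""
    idx = [vars_.index(v) for v in target]
    return {tuple(k[i] for i in idx): p for k, p in dist.items()}

# Toy decomposable model: A and C conditionally independent given B.
pB, pA, pC = {0: 0.5, 1: 0.5}, {0: 0.3, 1: 0.7}, {0: 0.2, 1: 0.6}
joint = {}
for a, b, c in product([0, 1], repeat=3):
    pa = pA[b] if a == 1 else 1 - pA[b]
    pc = pC[b] if c == 1 else 1 - pC[b]
    joint[(a, b, c)] = pB[b] * pa * pc

# Its two building blocks: the low-dimensional marginals P(A,B) and P(B,C).
f = marginal(joint, ["A", "B", "C"], ["A", "B"])
g = marginal(joint, ["A", "B", "C"], ["B", "C"])

# Both orderings of the model reproduce the same joint, so either A or C
# (or B) can be made to appear at the beginning of the model.
h1, v1 = compose(f, ["A", "B"], g, ["B", "C"])   # A first
h2, v2 = compose(g, ["B", "C"], f, ["A", "B"])   # B,C first
h2r = reorder(h2, v2, ["A", "B", "C"])
```

Because the two marginals come from a common decomposable joint, `h1` and `h2r` agree with `joint` exactly; for a non-flexible model, swapping the order in a composition sequence can change the resulting distribution.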
