Abstract

In 2018, Jiroušek and Shenoy proposed a definition of entropy for Dempster-Shafer (D-S) belief functions called decomposable entropy (d-entropy). This paper provides an algorithm for computing the d-entropy of directed graphical D-S belief function models. We illustrate the algorithm using Almond's Captain's Problem example. For belief-function undirected graphical models, assuming that the set of belief functions in the model is non-informative, the belief functions are distinct. We illustrate this using Haenni-Lehmann's Communication Network problem. As the joint belief function for this model is quasi-consonant, it follows from a property of d-entropy that the d-entropy of this model is zero, and no algorithm is required. For a class of undirected graphical models, we provide an algorithm for computing the d-entropy of such models. Finally, the d-entropy coincides with Shannon's entropy for the probability mass function of a single random variable and for a large multi-dimensional probability distribution expressed as a directed acyclic graph model called a Bayesian network. We illustrate this using Lauritzen-Spiegelhalter's Chest Clinic example represented as a belief-function directed graphical model.
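
The final claim, that d-entropy reduces to Shannon's entropy for a single-variable probability mass function, can be illustrated with a small numerical sketch. The code below is not taken from the paper; it assumes the decomposable entropy of a basic probability assignment (bpa) m can be written via the commonality function Q_m(a) = sum of m(b) over b containing a, as H_d(m) = sum over nonempty subsets a of (-1)^|a| Q_m(a) log2 Q_m(a), and checks that a Bayesian bpa (all focal elements singletons) yields Shannon's entropy. All names in the sketch are illustrative.

```python
# Hedged sketch (not from the paper): d-entropy of a basic probability
# assignment (bpa), assuming the decomposable entropy takes the form
#   H_d(m) = sum over nonempty a of (-1)^|a| * Q(a) * log2(Q(a)),
# where Q(a) = sum of m(b) over focal elements b containing a,
# with 0 * log 0 taken as 0.

from itertools import combinations
from math import log2


def nonempty_subsets(frame):
    """All nonempty subsets of the frame, as frozensets."""
    return [frozenset(c) for r in range(1, len(frame) + 1)
            for c in combinations(frame, r)]


def commonality(m, a):
    """Q(a): total mass of focal elements that contain a."""
    return sum(v for b, v in m.items() if a <= b)


def d_entropy(m, frame):
    """Assumed form of the decomposable entropy of bpa m on the frame."""
    h = 0.0
    for a in nonempty_subsets(frame):
        q = commonality(m, a)
        if q > 0:
            h += (-1) ** len(a) * q * log2(q)
    return h


if __name__ == "__main__":
    frame = {"x1", "x2", "x3"}
    # Bayesian bpa: every focal element is a singleton, masses = probabilities.
    p = {"x1": 0.5, "x2": 0.3, "x3": 0.2}
    m_bayes = {frozenset({x}): px for x, px in p.items()}
    shannon = -sum(px * log2(px) for px in p.values())
    # For singletons Q({x}) = p(x), and Q(a) = 0 for |a| >= 2, so the
    # two quantities agree.
    print(d_entropy(m_bayes, frame), shannon)
```

For a non-Bayesian bpa the larger subsets contribute as well, which is where the algorithms described in the paper for graphical models come into play.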
