Abstract

How do we make inferences from a Bayesian network (BN) model with missing information? For example, we may lack priors for some variables, or conditionals for some states of the parent variables. It is well known that the Dempster-Shafer (D-S) belief function theory generalizes probability theory. One solution, therefore, is to embed an incomplete BN model in a D-S belief function model, leaving the missing parameters unspecified, and then make inferences from the belief function model. We demonstrate this using an implementation of a local computation algorithm for D-S belief function models called the “Belief function machine.” One advantage of this approach is that we obtain interval estimates of the probabilities of interest. Filling in the missing data with Laplacian (equally likely) or maximum-entropy priors or conditionals yields point estimates for the probabilities of interest, masking the uncertainty in those estimates. Bayesian inference cannot proceed from an incomplete model, and a Bayesian sensitivity analysis of the missing parameters is not a substitute for a belief-function analysis.
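
To make the interval-estimate claim concrete, here is a minimal Python sketch (independent of the Belief function machine mentioned above) of a standard D-S treatment of a two-node network X → Y whose prior for X is missing: the conditionals P(Y | X) are carried into the joint frame by conditional embedding, the missing prior becomes a vacuous belief function, and Dempster's rule followed by marginalization yields a [belief, plausibility] interval for Y rather than a point estimate. The network, the numbers 0.8 and 0.3, and all names are hypothetical.

```python
from itertools import product

# Frames: X in {x0, x1}, Y in {y0, y1}; the joint frame is X x Y.
X = ("x0", "x1")
Y = ("y0", "y1")
JOINT = frozenset(product(X, Y))

def dempster_combine(m1, m2):
    """Dempster's rule: intersect focal sets, renormalize away conflict."""
    out, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            if c:
                out[c] = out.get(c, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in out.items()}

def embed_conditional(x_val, p_y1):
    """Conditional embedding of P(Y | X = x_val) into the joint frame:
    each focal set fixes Y when X = x_val and says nothing otherwise."""
    rest = frozenset((xv, yv) for xv, yv in JOINT if xv != x_val)
    return {
        rest | {(x_val, "y1")}: p_y1,
        rest | {(x_val, "y0")}: 1.0 - p_y1,
    }

def marginalize_to_Y(m):
    """Project each focal set onto the Y coordinate."""
    out = {}
    for s, w in m.items():
        proj = frozenset(yv for _, yv in s)
        out[proj] = out.get(proj, 0.0) + w
    return out

# Hypothetical conditionals; the prior on X is *missing*.  Omitting it is
# equivalent to combining with a vacuous belief function (all mass on the
# full frame), which leaves the combination unchanged.
m = dempster_combine(embed_conditional("x1", 0.8),   # P(y1 | x1) = 0.8
                     embed_conditional("x0", 0.3))   # P(y1 | x0) = 0.3
m = dempster_combine(m, {JOINT: 1.0})                # vacuous prior on X

mY = marginalize_to_Y(m)
bel_y1 = sum(w for s, w in mY.items() if s <= frozenset({"y1"}))
pl_y1  = sum(w for s, w in mY.items() if s & frozenset({"y1"}))
print(f"P(Y = y1) lies in [{bel_y1:.2f}, {pl_y1:.2f}]")   # [0.24, 0.86]
```

With these hypothetical conditionals the sketch prints the interval [0.24, 0.86]. By contrast, imputing a Laplacian prior P(X = x1) = 0.5 commits to the single point 0.5 × 0.8 + 0.5 × 0.3 = 0.55, hiding the imprecision that the missing prior represents.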
