Abstract

The efficiency of algorithms for probabilistic inference in Bayesian networks can be improved by exploiting independence of causal influence. The factorized representation of independence of causal influence provides a factorized decomposition of certain independence of causal influence models. We describe how LAZY propagation, a junction tree based inference algorithm, can easily be extended to take advantage of the decomposition offered by the factorized representation. We introduce two extensions to the factorized representation that ease the knowledge acquisition task and reduce the space complexity of the representation exponentially in the state space size of the effect variable of an independence of causal influence model. We describe how the factorized representation can be used to solve tasks such as calculating the maximum a posteriori hypotheses, the maximum expected utility, and the most probable configuration. Finally, the results of an empirical evaluation indicate that considerable performance improvements can be obtained using LAZY propagation combined with the factorized representation, compared to LAZY propagation performed in junction trees constructed after either parent divorcing or temporal transformation has been applied to the Bayesian network.
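
As a hedged illustration of the kind of structure that independence of causal influence exposes (this is not the paper's exact factorized representation, which introduces auxiliary variables and additional factors), consider the standard binary noisy-OR model, whose conditional probability table factorizes over the causes:

\[
P(E = \mathrm{false} \mid C_1, \dots, C_n) \;=\; \prod_{i=1}^{n} (1 - p_i)^{c_i},
\]

where \(c_i \in \{0,1\}\) indicates whether cause \(C_i\) is present and \(p_i\) is the probability that \(C_i\) alone produces the effect \(E\). Keeping the per-cause factors separate, rather than multiplying them into a single conditional probability table over all \(n\) causes, avoids a table that grows exponentially in \(n\); the factorized representation and the modeling transformations mentioned above (parent divorcing, temporal transformation) are alternative ways of exploiting this structure during inference.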
