Abstract

Bayesian networks (BNs) have attained widespread use in data analysis and decision making. Well-studied topics include efficient inference, evidence propagation, parameter learning for complete and incomplete data, expert elicitation for calibrating BN probabilities, and structure learning. Researchers commonly assume the structure of the BN or glean it from expert elicitation or domain knowledge; the model is then calibrated by learning the parameters from relevant data. Model diagnostics for fitted BNs, however, have received little attention, and addressing that gap is the contribution of this article. We build on the definition of (conditional) independence to develop a graphical diagnostic that indicates whether the conditional independence assumptions imposed by the assumed BN structure are supported by the data. We develop the approach theoretically and describe a Monte Carlo method for generating uncertainty measures of the consistency of the data with the conditional independence assumptions implied by the model structure. We describe how this theoretical information and the data are presented in a graphical diagnostic tool. We demonstrate the approach on data simulated from BNs under different conditional independence assumptions, and we also apply the diagnostic to a real-world dataset. The results show that this approach is most feasible for smaller BNs; this limitation is not peculiar to the proposed diagnostic graphic, but reflects the general difficulty of combining large BNs with data in any manner (such as through parameter estimation). It is the authors' hope that this article helps highlight the need for more research into BN model diagnostics. This article has supplementary materials online.
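The abstract does not specify the diagnostic's details, but the general idea of checking a structure-implied conditional independence against data with a Monte Carlo reference can be sketched as follows. This is an illustration only, not the authors' method: it tests A ⊥ C | B in a hypothetical three-node chain BN (A → B → C) with binary variables, using an assumed discrepancy statistic and data resimulated from the fitted conditional of C given B.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_chain(n, rng):
    """Simulate binary data from the chain A -> B -> C, so A is
    conditionally independent of C given B by construction.
    All probabilities here are arbitrary illustrative choices."""
    a = rng.random(n) < 0.4
    b = rng.random(n) < np.where(a, 0.8, 0.3)
    c = rng.random(n) < np.where(b, 0.7, 0.2)
    return a.astype(int), b.astype(int), c.astype(int)

def ci_statistic(a, b, c):
    """Assumed discrepancy: the largest absolute gap between the
    empirical P(C=1 | A, B) and P(C=1 | B). Under A _||_ C | B the
    gap should be small, up to sampling noise."""
    gaps = []
    for bv in (0, 1):
        mb = b == bv
        p_c_given_b = c[mb].mean()
        for av in (0, 1):
            m = mb & (a == av)
            if m.sum() > 0:
                gaps.append(abs(c[m].mean() - p_c_given_b))
    return max(gaps)

# Observed data and observed statistic.
a, b, c = simulate_chain(2000, rng)
t_obs = ci_statistic(a, b, c)

# Monte Carlo reference distribution: fit P(C=1 | B) from the data
# (a model in which the independence holds), then repeatedly
# resimulate C given the observed B, keeping A and B fixed.
p_c1 = np.array([c[b == 0].mean(), c[b == 1].mean()])
ref = []
for _ in range(500):
    c_sim = (rng.random(len(b)) < p_c1[b]).astype(int)
    ref.append(ci_statistic(a, b, c_sim))

# Fraction of reference statistics at least as large as the observed
# one; a tiny value would flag the independence assumption.
p_value = float(np.mean(np.array(ref) >= t_obs))
print(f"observed statistic {t_obs:.3f}, Monte Carlo p-value {p_value:.2f}")
```

In a graphical version of such a check, one would plot the observed statistic against the Monte Carlo reference distribution for each independence implied by the structure, rather than reduce each to a single p-value.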
