Abstract

Bayesian networks (BNs) are widely accepted as efficient tools for representing causal models for decision making under uncertainty. In some applications, networks are built in which the conditional probability tables are not derived from scientific laws but rely on expert knowledge. Such applications require an assessment of whether the knowledge representation is precise enough to infer reliable results. The uncertainty representation and reasoning evaluation framework (URREF) ontology offers a unified framework for the objective assessment of uncertainty representation and reasoning. This paper addresses the analysis of uncertainty in BNs and develops metrics for URREF criteria based on the principle of entropy. Uncertainty in BNs includes variable transformation (accuracy), model structure (precision), and reasoning (probability distribution interpretations). The set of metrics is used to investigate a practical use case of probabilistic modeling for cyber threat analysis, and is correlated with a set of complementary metrics already described in a previous contribution. The goal of the paper is to provide a new set of metrics able to assess, for a specific model and given input sources, the quality of the results of BN-based inferences in terms of accuracy, precision, and end-user interpretation.
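The abstract does not specify the exact entropy formulation used; as a minimal illustrative sketch, an entropy-based uncertainty metric for a BN node's posterior distribution could be computed as below. The `posterior` values and the idea of normalizing by the maximum attainable entropy are assumptions for illustration, not the paper's actual metrics.

```python
import math

def shannon_entropy(dist):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def normalized_entropy(dist):
    """Entropy scaled to [0, 1] by its maximum log2(n) over n outcomes:
    0 means a fully certain posterior, 1 means maximal uncertainty."""
    n = len(dist)
    if n < 2:
        return 0.0
    return shannon_entropy(dist) / math.log2(n)

# Hypothetical posterior over a three-state 'threat level' node after inference
posterior = [0.7, 0.2, 0.1]
print(round(normalized_entropy(posterior), 3))
```

A normalized value near 0 would indicate that the inference produced a sharply peaked, easily interpretable posterior, while a value near 1 would signal that the result is close to uninformative.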
