There is a widespread belief in the Bayesian network (BN) community that the overall accuracy of BN inference results is not too sensitive to the precision of their parameters. We present the results of several experiments in which we put this belief to the test in the context of medical diagnostic models. We study the deterioration of accuracy under random symmetric noise as well as under biased noise that represents the overconfidence and underconfidence of human experts. Our results demonstrate consistently, across all models studied, that while noise leads to deterioration of accuracy, small amounts of noise have minimal effect on the diagnostic accuracy of BN models. Overconfidence, common among human experts, appears to be safer than symmetric noise and much safer than underconfidence in terms of the resulting accuracy. Noise in medical laboratory results and disease nodes, as well as in nodes forming the Markov blanket of the disease nodes, has the largest effect on accuracy. In light of these results, knowledge engineers need to be only moderately concerned with the overall quality of the numerical parameters of BNs and should direct their effort where it is most needed, as indicated by sensitivity analysis.
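The abstract does not specify the exact noise model; a common scheme in such robustness studies is to perturb each probability with Gaussian noise in log-odds space, where a directional bias term can mimic overconfidence (probabilities pushed toward 0 or 1) or underconfidence (pushed toward 0.5). The sketch below is illustrative only; the function name, `sigma`/`bias` parameters, and the biasing scheme are assumptions, not the paper's method.

```python
import math
import random

def perturb(p, sigma, bias=0.0):
    """Perturb a probability p with Gaussian noise in log-odds space.

    sigma -- noise magnitude (std. dev. in log-odds units); sigma=0 and
             bias=0 leaves p unchanged (symmetric noise when bias=0).
    bias  -- shift away from 0.5: positive values model overconfidence
             (probabilities pushed toward 0 or 1), negative values model
             underconfidence. Purely illustrative parameterization.
    """
    odds = math.log(p / (1.0 - p))           # logit transform
    shift = bias if odds >= 0 else -bias     # push odds away from 0
    noisy = odds + shift + random.gauss(0.0, sigma)
    return 1.0 / (1.0 + math.exp(-noisy))    # inverse logit, stays in (0, 1)

random.seed(0)
print(perturb(0.9, sigma=0.1))               # small symmetric noise: near 0.9
print(perturb(0.9, sigma=0.1, bias=0.5))     # overconfident: pushed toward 1
```

Applying such a perturbation to every conditional probability table entry (renormalizing each distribution afterward) and re-running diagnosis is one way to measure how accuracy degrades with noise level.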