The challenge problems for the Epistemic Uncertainty Workshop at Sandia National Laboratories provide common ground for comparing different mathematical theories of uncertainty, referred to as General Information Theories (GITs). These problems also present an opportunity to discuss the use of expert knowledge as an important constituent of uncertainty quantification. More specifically, how do the principles and methods of eliciting and analyzing expert knowledge apply to these problems and to similar ones encountered in complex technical problem solving and decision making? We address this question by demonstrating how the elicitation issues, and the knowledge that experts provide, can be used to assess the uncertainty in outputs that emerge from the black-box model or computational code represented by the challenge problems. In our experience, the rich collection of GITs provides an opportunity to capture experts' knowledge and associated uncertainties in a way that is consistent with their thinking, problem solving, and problem representation. The elicitation process is rightly treated as part of the overall analytical approach, and the information elicited is not simply a source of data. In this paper, we detail how the elicitation process itself affects the analyst's ability to represent, aggregate, and propagate uncertainty, and how it shapes the interpretation of uncertainties in the outputs. Although this approach does not advocate a specific GIT, the elicitation does yield answers under uncertainty.