Abstract

Expert assessments are routinely used to inform management and other decision making. However, these assessments often contain considerable biases and uncertainties, and should therefore be calibrated whenever possible. Moreover, coherently combining multiple expert assessments into one estimate poses a long-standing problem in statistics, since modeling expert knowledge is often difficult. Here, we present a hierarchical Bayesian model for expert calibration in the task of estimating a continuous univariate parameter. The model allows experts’ biases to vary as a function of the true value of the parameter and according to the expert’s background. We follow the fully Bayesian approach (the so-called supra-Bayesian approach) and model experts’ bias functions explicitly using hierarchical Gaussian processes. We show how to use calibration data to infer the experts’ observation models through their bias functions and to calculate bias-corrected posterior distributions for an unknown system parameter of interest. We demonstrate and test our model and methods with simulated data and a real case study on data-limited fisheries stock assessment. The case study results show that experts’ biases vary with respect to the true system parameter value and that calibrating the expert assessments improves the inference compared to using uncalibrated expert assessments or a vague uniform guess. Moreover, the bias functions in the real case study reveal important differences in the reliability of the alternative experts. The model and methods presented here can also be applied straightforwardly to applications other than our case study.
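
As a concrete illustration of the approach described above, the following is a minimal, self-contained sketch in Python/NumPy, not the authors’ implementation. It treats an expert’s assessment y as the true parameter θ plus a smooth bias b(θ) with a Gaussian-process prior, learns b from calibration data in which θ is known, and then inverts the learned observation model on a grid to obtain a bias-corrected posterior for a new, unknown θ. All names, kernel settings, and noise levels are assumptions made for the example; the paper’s model is additionally hierarchical across several experts and fully Bayesian over the hyperparameters.

```python
# Minimal illustrative sketch (assumptions throughout, not the authors' code):
# an expert's assessment y is modeled as the true parameter theta plus a
# smooth bias b(theta) with a Gaussian-process (GP) prior; calibration data
# with known theta are used to learn b; the learned observation model is then
# inverted on a grid to get a bias-corrected posterior for a new theta.
import numpy as np

rng = np.random.default_rng(0)
ell, s2, sigma_y = 2.0, 2.0, 0.5  # assumed GP lengthscale, GP variance, assessment noise


def rbf_kernel(a, b):
    """Squared-exponential covariance matrix between 1-D input arrays a and b."""
    return s2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)


# Simulated calibration data: parameter values with known answers and the
# expert's noisy, biased assessments of them (the sine bias is the "truth"
# the GP has to recover; it is unknown to the analyst).
theta_cal = np.linspace(0.0, 10.0, 15)
y_cal = theta_cal + 1.5 * np.sin(0.5 * theta_cal) + rng.normal(0.0, sigma_y, 15)

# GP regression for the bias function, conditioning on the residuals
# y_cal - theta_cal; posterior mean and variance of b on an evaluation grid.
theta_grid = np.linspace(0.0, 10.0, 201)
K = rbf_kernel(theta_cal, theta_cal)
K_s = rbf_kernel(theta_grid, theta_cal)
A = K + sigma_y**2 * np.eye(theta_cal.size)
b_mean = K_s @ np.linalg.solve(A, y_cal - theta_cal)
b_var = np.clip(s2 - np.einsum("ij,ji->i", K_s, np.linalg.solve(A, K_s.T)), 0.0, None)

# Bias-corrected posterior for a new, unknown theta given one new assessment,
# combining a flat prior on the grid with the learned observation model
# y_new ~ N(theta + b(theta), sigma_y^2 + Var[b(theta)]).
y_new = 6.3
pred_sd = np.sqrt(sigma_y**2 + b_var)
log_lik = -0.5 * ((y_new - (theta_grid + b_mean)) / pred_sd) ** 2 - np.log(pred_sd)
post = np.exp(log_lik - log_lik.max())
dx = theta_grid[1] - theta_grid[0]
post /= post.sum() * dx  # normalize the gridded density

print("Assessment taken at face value:", y_new)
print("Bias-corrected posterior mean: ", round(float((theta_grid * post).sum() * dx), 3))
```

In this toy version a single expert’s bias is learned with ordinary GP regression and the prior on θ is flat; extending the same machinery to several experts with a shared hierarchical GP prior on their bias functions is the route the paper takes.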

Highlights

  • Expert elicitation is an important part of statistical analyses in various fields of research and decision making (O’Hagan et al., 2006; Dias et al., 2018; Albert et al., 2012)

  • How should the assessments of multiple experts be utilized in statistical inference and decision making so that the uncertainties and possible systematic errors, or biases, in the assessments are properly accounted for (Tversky and Kahneman, 1974; Lindley et al., 1979; O’Hagan et al., 2006; Burgman et al., 2011; Dias et al., 2018)? Here we focus on this question, paying special attention to the biases, in other words, to the calibration of experts’ assessments

  • We examined the calibration of the posterior distributions by calculating the coverage of the 50%, 75%, and 90% central probability intervals (CPIs); a small sketch of this check follows the list
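
As a concrete illustration of this coverage check, the short sketch below uses toy data (an assumption for the example, not the paper’s simulations) to show how CPI coverage is computed from posterior samples and known true parameter values.

```python
# Assumed toy setup (not the paper's code) for the coverage check described in
# the last highlight: for each simulated case we have posterior draws for theta
# and the known true value, and we count how often the truth falls inside the
# 50%, 75%, and 90% central probability intervals (CPIs). With well-calibrated
# posteriors the empirical coverage should be close to the nominal level.
import numpy as np

rng = np.random.default_rng(1)

n_cases, n_draws = 200, 4000
theta_true = rng.uniform(0.0, 10.0, n_cases)
# Toy posteriors centered on the truth, so coverage should match the nominal level.
posterior_draws = theta_true[:, None] + rng.normal(0.0, 1.0, (n_cases, n_draws))

for level in (0.50, 0.75, 0.90):
    tail = (1.0 - level) / 2.0
    lo = np.quantile(posterior_draws, tail, axis=1)
    hi = np.quantile(posterior_draws, 1.0 - tail, axis=1)
    coverage = np.mean((theta_true >= lo) & (theta_true <= hi))
    print(f"{int(level * 100)}% CPI: empirical coverage {coverage:.2f}")
```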

Introduction

Expert elicitation is an important part of statistical analyses in various fields of research and decision making (O’Hagan et al., 2006; Dias et al., 2018; Albert et al., 2012). While expert knowledge is often a valuable, and sometimes even the only available, source of information, its successful utilization in decision making immediately raises practical questions. In particular, how should the assessments of multiple experts be utilized in statistical inference and decision making so that the uncertainties and possible systematic errors, or biases, in the assessments are properly accounted for (Tversky and Kahneman, 1974; Lindley et al., 1979; O’Hagan et al., 2006; Burgman et al., 2011; Dias et al., 2018)? Elicitation and the subsequent use of the assessments are intertwined issues; here we focus on the latter, paying special attention to the biases, in other words, to the calibration of experts’ assessments.
