Abstract

Decision makers often rely on expert opinion when making forecasts under uncertainty. In doing so, they confront two methodological challenges: the elicitation problem, which requires them to extract meaningful information from experts; and the aggregation problem, which requires them to combine expert opinions by resolving disagreements. Linear averaging is a justifiably popular method for addressing aggregation, but its robust simplicity imposes two requirements on elicitation. First, each expert must offer probabilistically coherent forecasts; second, each expert must respond to all our queries. In practice, human judges (even experts) may be incoherent, and may prefer to assess only the subset of events about which they are comfortable offering an opinion. In this paper, a new methodology is developed for combining expert assessments of chance. The method retains the conceptual and computational simplicity of linear averaging, but generalizes the standard approach by relaxing the requirements on expert elicitation. The method also enjoys provable performance guarantees, and in experiments with real-world forecasting data is shown to offer both computational efficiency and competitive forecasting gains compared with rival aggregation methods. This paper is relevant to the practice of decision analysis, for it enables an elicitation methodology in which judges are free to choose which events they assess.
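For context, the sketch below illustrates the baseline the abstract builds on: an equal-weight linear average (linear opinion pool) of experts' probability forecasts. The function name, the weighting scheme, and the renormalization step for incoherent forecasts are illustrative assumptions, not the paper's proposed generalization.

```python
from typing import Optional, Sequence


def linear_pool(forecasts: Sequence[Sequence[float]],
                weights: Optional[Sequence[float]] = None) -> list:
    """Combine experts' probability vectors by a weighted average.

    forecasts: one probability vector per expert over the same events.
    weights:   nonnegative expert weights; defaults to equal weighting.
    """
    n_experts = len(forecasts)
    if weights is None:
        weights = [1.0 / n_experts] * n_experts

    n_events = len(forecasts[0])
    pooled = [
        sum(w * f[j] for w, f in zip(weights, forecasts))
        for j in range(n_events)
    ]

    # If an expert's forecast is incoherent, the pooled vector may not sum
    # to 1; renormalizing is one simple (illustrative) fix, not the method
    # developed in the paper.
    total = sum(pooled)
    return [p / total for p in pooled]


# Example: three experts assess the same three mutually exclusive events.
print(linear_pool([[0.5, 0.3, 0.2], [0.6, 0.2, 0.2], [0.4, 0.4, 0.2]]))
```

Note that this baseline requires every expert to supply a probability for every event; the paper's contribution is to relax exactly that requirement.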
