Abstract

Forecasters predicting the chances of a future event may disagree due to differing evidence or noise. To harness the collective evidence of the crowd, we propose a Bayesian aggregator that is regularized by analyzing the forecasters’ disagreement and ascribing over-dispersion to noise. Our aggregator requires no user intervention and can be computed efficiently even for large numbers of predictions. To illustrate, we evaluate our aggregator on subjective probability predictions collected during a four-year forecasting tournament sponsored by the US intelligence community. Our aggregator improves the squared error (a.k.a. the Brier score) of simple averaging by around 20% and that of other commonly used aggregators by 10–25%. This advantage stems almost exclusively from improved calibration. An R package called braggR implements our method and is available on CRAN.
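To make the evaluation metric concrete, the sketch below (in Python, not the paper's R package, and not the proposed aggregator itself) shows the two baseline notions the abstract references: the simple-averaging aggregator and the Brier score, i.e., the squared error of a probability forecast against a binary outcome.

```python
def brier_score(p, y):
    """Squared error of probability forecast p against outcome y in {0, 1}."""
    return (p - y) ** 2

def simple_average(forecasts):
    """Baseline aggregator: unweighted mean of the forecasters' probabilities."""
    return sum(forecasts) / len(forecasts)

# Hypothetical example: three forecasters predict one event, which occurs (y = 1).
forecasts = [0.6, 0.7, 0.8]
p_avg = simple_average(forecasts)          # 0.7
print(round(brier_score(p_avg, 1), 4))     # 0.09
```

The paper's aggregator replaces `simple_average` with a regularized Bayesian estimate that shrinks over-dispersed forecasts; the braggR package on CRAN provides that implementation.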
