Abstract

Forecasters predicting the chances of a future event may disagree due to differing evidence or noise. To harness the collective evidence of the crowd, we propose a Bayesian aggregator that is regularized by analyzing the forecasters’ disagreement and ascribing over-dispersion to noise. Our aggregator requires no user intervention and can be computed efficiently even for a large number of predictions. To illustrate, we evaluate our aggregator on subjective probability predictions collected during a four-year forecasting tournament sponsored by the US intelligence community. Our aggregator improves the squared error (a.k.a. the Brier score) of simple averaging by around 20% and that of other commonly used aggregators by 10–25%. This advantage stems almost exclusively from improved calibration. An R package called braggR implements our method and is available on CRAN.
