Abstract

Is forecasting ability a stable trait? While domain knowledge and reasoning abilities are necessary for making accurate forecasts, research shows that knowing how accurate forecasters have been in the past is the best predictor of their future accuracy. However, unlike the measurement of other traits, evaluating forecasting skill requires a substantial time investment: forecasters must make predictions about events that may not resolve for days, weeks, months, or even years before their accuracy can be estimated. Our work builds upon methods such as cultural consensus theory and proxy scoring rules to show that talented forecasters can be identified in real time, without requiring any event resolutions. We define a peer similarity-based intersubjective evaluation method and test its utility in a unique longitudinal forecasting experiment. Because forecasters predicted all events at the same points in time, many of the confounds common to forecasting tournaments or observational data were eliminated. This allowed us to demonstrate the effectiveness of our method in real time, as the experiment progressed and more information about forecasters became available. Intersubjective accuracy scores, which can be obtained immediately after forecasts are made, were both valid and reliable estimators of forecasting talent. We also found that asking forecasters to make meta-predictions about what they expect others to believe can serve as an incentive-compatible method of intersubjective evaluation. Our results indicate that selecting small groups of forecasters, or even single forecasters, based on intersubjective accuracy can yield subsequent forecasts that approximate the accuracy of much larger crowd aggregates.
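To make the core idea concrete, below is a minimal sketch of one plausible peer similarity-based intersubjective score, not the paper's exact procedure: each forecaster's probability judgments are scored with a quadratic (Brier-style) proxy rule against the leave-one-out mean of the other forecasters' judgments, so no event resolutions are needed. The function name and the toy data are illustrative assumptions.

import numpy as np

def intersubjective_scores(forecasts: np.ndarray) -> np.ndarray:
    """
    Peer similarity-based intersubjective scores (illustrative sketch).

    forecasts: (n_forecasters, n_events) array of probability forecasts
               for the same set of binary events.

    For each forecaster, the unknown outcome of each event is replaced by
    the leave-one-out mean of the peers' probabilities, and the forecast
    is scored with a quadratic (Brier-style) proxy scoring rule. Higher
    (less negative) scores indicate closer agreement with the peer consensus.
    """
    n, _ = forecasts.shape
    totals = forecasts.sum(axis=0)                     # per-event sums over all forecasters
    scores = np.empty(n)
    for i in range(n):
        peer_mean = (totals - forecasts[i]) / (n - 1)  # leave-one-out crowd forecast
        scores[i] = -np.mean((forecasts[i] - peer_mean) ** 2)
    return scores

# Example: 4 forecasters, 3 events (probabilities that each event occurs)
p = np.array([
    [0.9, 0.2, 0.6],
    [0.8, 0.3, 0.5],
    [0.7, 0.2, 0.7],
    [0.1, 0.9, 0.1],   # an outlier relative to the peer consensus
])
print(intersubjective_scores(p))  # the outlier receives the worst score

Under this sketch, forecasters whose judgments track the peer consensus score well immediately, which mirrors the abstract's claim that such scores can be computed as soon as forecasts are made.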
