Abstract

This technical note, based on the more comprehensive note, Eliciting and Evaluating Expert (UVA-QA-0734), provides a streamlined presentation of Brier and log scores as tools for assessing the forecasting records of a pool of experts. The note is designed to be used in conjunction with a forecasting exercise.

Excerpt (UVA-QA-0772, Rev. Sept. 12, 2014)

Scoring Expert Forecasts

Evaluating the forecasts of others can be a difficult task. One approach is to score an expert's forecast once the realization of the uncertainty is known. A track record of high scores on multiple forecasts may yield important insights into the expertise an individual possesses. In this note, we describe several scoring rules for evaluating expert opinion.

Scoring Forecasts of Discrete Events

Scoring rules first appeared in the 1950s as a way to evaluate meteorological forecasts. Since that time, scoring rules have found a wide variety of applications in business and other fields. To this day, meteorologists in the United States are evaluated using a Brier scoring rule. When a discrete uncertainty has only two possible outcomes (e.g., rain/no rain), the Brier scoring rule assigns a score of −(1 − p)², where p is the probability forecast reported for the event that occurs. . . .
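
To make the scoring concrete, here is a minimal sketch in Python of how the binary-event Brier score above, together with the standard log score (ln p of the realized outcome, which the full note develops), might be computed over an expert's track record. The function names and the example forecasts are illustrative assumptions, not taken from the note.

import math

def brier_score(p: float) -> float:
    """Brier score for a two-outcome event: -(1 - p)^2, where p is the
    probability the forecaster assigned to the outcome that occurred.
    Ranges from -1 (worst) to 0 (best)."""
    return -(1.0 - p) ** 2

def log_score(p: float) -> float:
    """Log score: ln(p) for the realized outcome.
    Ranges from -infinity (worst) to 0 (best)."""
    return math.log(p)

# Hypothetical track record: each entry is the probability the expert
# assigned to the event that ended up occurring.
track_record = [0.9, 0.7, 0.6, 0.95]

print("mean Brier:", sum(brier_score(p) for p in track_record) / len(track_record))
print("mean log:  ", sum(log_score(p) for p in track_record) / len(track_record))

Under either rule, scores closer to zero indicate better-calibrated forecasts, so averaging scores across many resolved events gives a simple summary of an expert's record.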
