Abstract

A user-focused verification approach for evaluating probability forecasts of binary outcomes (also known as probabilistic classifiers) is demonstrated that (i) is based on proper scoring rules, (ii) focuses on user decision thresholds, and (iii) provides actionable insights. It is argued that categorical performance diagrams and the critical success index may produce misleading results when they are used to evaluate the overall predictive performance of probabilistic forecasts rather than their discrimination ability. Instead, Murphy diagrams are shown to provide a better understanding of overall predictive performance as a function of the user's probabilistic decision threshold. We illustrate how to select a proper scoring rule based on the relative importance of different user decision thresholds, and how this choice affects scores of overall predictive performance as well as supporting measures of discrimination and calibration. These approaches and ideas are demonstrated using several probabilistic thunderstorm forecast systems as well as synthetic forecast data. Furthermore, a fair method for comparing the performance of probabilistic and categorical forecasts is illustrated using the fixed risk multicategorical (FIRM) score, a proper scoring rule directly connected to values on the Murphy diagram. While the methods are illustrated with thunderstorm forecasts, they are applicable to evaluating probabilistic forecasts in any situation with binary outcomes.

Significance Statement

Recently, several papers have presented verification results for probabilistic forecasts using so-called categorical performance diagrams, which summarize multiple verification metrics. While categorical performance diagrams measure discrimination ability, we demonstrate how they can lead to incorrect conclusions when used to evaluate the overall predictive performance of probabilistic forecasts.
Drawing on recent advances in the statistical literature, we present a comprehensive approach for the meteorological community that (i) does not reward a forecaster who “hedges” their forecast, (ii) focuses on the importance of the forecast user’s decision threshold(s), and (iii) provides actionable insights. Additionally, we describe an approach for fairly comparing the skill of categorical forecasts with that of probabilistic forecasts.
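The Murphy-diagram idea referred to above can be sketched in a few lines. The snippet below is a minimal illustration (not the paper's code), assuming the elementary scoring functions for binary outcomes of Ehm et al. (2016), in which a miss at decision threshold theta costs 1 - theta and a false alarm costs theta; averaging this elementary score over all forecast cases at each theta traces the Murphy diagram, and the Brier score equals twice the area under the curve. The forecast/observation pairs are synthetic and purely illustrative.

```python
def elementary_score(p, y, theta):
    """Economic loss at decision threshold theta for forecast probability p
    and binary outcome y: a miss (y=1, p <= theta) costs 1 - theta,
    a false alarm (y=0, p > theta) costs theta, otherwise 0."""
    return (float(p > theta) - float(y > theta)) * (theta - y)

def murphy_curve(probs, obs, thetas):
    """Mean elementary score at each threshold: one Murphy-diagram point per theta."""
    n = len(probs)
    return [sum(elementary_score(p, y, t) for p, y in zip(probs, obs)) / n
            for t in thetas]

# Synthetic forecast probabilities and observed binary outcomes (illustrative only).
probs = [0.9, 0.2, 0.7, 0.1, 0.5, 0.8]
obs   = [1,   0,   1,   0,   1,   0  ]

thetas = [i / 1000 for i in range(1, 1000)]
curve = murphy_curve(probs, obs, thetas)

# Twice the trapezoidal area under the curve recovers the Brier score.
area = sum(0.5 * (curve[i] + curve[i + 1]) * (thetas[i + 1] - thetas[i])
           for i in range(len(thetas) - 1))
brier = sum((p - y) ** 2 for p, y in zip(probs, obs)) / len(probs)
print(2 * area, brier)  # the two values agree to within the grid spacing
```

Reading the curve at a single theta gives the expected loss for a user with that particular decision threshold, which is why the diagram supports threshold-specific, user-focused conclusions rather than a single aggregate ranking.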