Abstract

This note describes how the widely used Brier and ranked probability skill scores (BSS and RPSS, respectively) can be correctly applied to quantify the potential skill of probabilistic multimodel ensemble forecasts. It builds upon the study of Weigel et al., where a revised RPSS, the so-called discrete ranked probability skill score (RPSSD), was derived, circumventing the known negative bias of the RPSS for small ensemble sizes. Since the BSS is a special case of the RPSS, a debiased discrete Brier skill score (BSSD) can be formulated in the same way. Here, the approach of Weigel et al., which was previously applicable only to single-model ensembles, is generalized to weighted multimodel ensemble forecasts. By introducing an "effective ensemble size" characterizing the multimodel, the new generalized RPSSD can be expressed such that its structure becomes equivalent to the single-model case. This is of practical importance for multimodel assessment studies, where the consequences of varying effective ensemble size need to be clearly distinguished from the true benefits of multimodel combination. The performance of the new generalized RPSSD formulation is illustrated with examples of weighted multimodel ensemble forecasts, both in a synthetic random forecasting context and with real seasonal forecasts from operational models. A central conclusion of this study is that, for small ensemble sizes, multimodel assessment studies should not be carried out solely on the basis of the classical RPSS, since true changes in predictability may be hidden by bias effects, a deficiency that can be overcome with the new generalized RPSSD.
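The small-ensemble bias that motivates the RPSSD/BSSD can be demonstrated with a quick simulation. The sketch below (not the paper's exact formulation) uses the known result that an M-member ensemble drawn at random from climatology has an expected BSS of -1/M rather than 0, and that adding a size-dependent intrinsic-unreliability term to the reference score removes this bias; the choices of M, p, and the trial count are illustrative assumptions.

```python
import numpy as np

# Sketch: negative bias of the classical BSS for a random M-member ensemble,
# and a debiased BSSD in the spirit of Weigel et al. (assumed formulation).
rng = np.random.default_rng(0)
M, p, N = 5, 0.5, 200_000           # ensemble size, climatological prob., trials

y = rng.random(N) < p               # binary observations drawn from climatology
f = rng.binomial(M, p, N) / M       # forecast prob. from a random M-member ensemble

bs = np.mean((f - y) ** 2)          # Brier score of the random ensemble
bs_cl = p * (1 - p)                 # Brier score of the climatological forecast
bss = 1 - bs / bs_cl                # classical BSS: expectation is -1/M, not 0

d = p * (1 - p) / M                 # size-dependent intrinsic-unreliability term
bssd = 1 - bs / (bs_cl + d)         # debiased BSSD: expectation is ~0

print(f"BSS  = {bss:+.3f}  (about {-1/M:+.3f} expected)")
print(f"BSSD = {bssd:+.3f}  (about +0.000 expected)")
```

A skill-free random forecast thus looks negatively skillful under the classical BSS but correctly scores near zero under the debiased variant, which is why bias effects can mask true predictability changes when ensemble sizes are small.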
