Abstract

The notion of fair scores for ensemble forecasts was introduced recently to reward ensembles with members that behave as though they and the verifying observation are sampled from the same distribution. In the case of forecasting binary outcomes, a characterization is given of a general class of fair scores for ensembles that are interpreted as random samples. This is also used to construct classes of fair scores for ensembles that forecast multicategory and continuous outcomes. The usual Brier, ranked probability and continuous ranked probability scores for ensemble forecasts are shown to be unfair, while adjusted versions of these scores are shown to be fair. A definition of fairness is also proposed for ensembles with members that are interpreted as being dependent and it is shown that fair scores exist only for some forms of dependence.

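As an illustrative sketch only (not reproduced from the paper), the Python functions below compute the unadjusted ensemble Brier score alongside the fair-score adjustments commonly used for ensembles interpreted as random samples: subtracting i(m - i)/(m^2(m - 1)) from the binary Brier score, and dividing the member-pair term of the CRPS by m(m - 1) instead of m^2. The function names and example data are arbitrary choices made for this sketch.

# Minimal sketch, assuming the standard fair-score adjustments for
# random-sample ensembles; not code from the paper itself.
import numpy as np

def brier_score(members, obs):
    # Unadjusted ensemble Brier score: (i/m - y)^2, where i is the number
    # of members forecasting the event and y is the binary observation.
    m = len(members)
    i = np.sum(members)
    return (i / m - obs) ** 2

def fair_brier_score(members, obs):
    # Fair Brier score: subtracts i(m - i) / (m^2 (m - 1)) from the
    # unadjusted score, removing the penalty due to finite ensemble size.
    m = len(members)
    i = np.sum(members)
    return (i / m - obs) ** 2 - i * (m - i) / (m ** 2 * (m - 1))

def fair_crps(members, obs):
    # Fair CRPS for a continuous ensemble: mean absolute error of the
    # members minus the mean absolute difference over distinct member
    # pairs divided by 2m(m - 1) rather than 2m^2.
    x = np.asarray(members, dtype=float)
    m = len(x)
    term1 = np.mean(np.abs(x - obs))
    term2 = np.sum(np.abs(x[:, None] - x[None, :])) / (2 * m * (m - 1))
    return term1 - term2

# Example: a 5-member binary ensemble with an observed event (y = 1).
members = np.array([1, 0, 1, 1, 0])
print(brier_score(members, 1))       # (3/5 - 1)^2 = 0.16
print(fair_brier_score(members, 1))  # 0.16 - 6/100 = 0.10

In this example the fair score is smaller than the unadjusted score, reflecting the removal of the systematic penalty that a finite random-sample ensemble incurs under the unadjusted Brier score.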