Abstract

Coronavirus disease 2019 (COVID-19) forecasts from over 100 models are readily available. However, little published information exists regarding the performance of their uncertainty estimates (i.e. probabilistic performance). To evaluate their probabilistic performance, we employ the classical model (CM), an established method typically used to validate expert opinion. In this analysis, we assess both the predictive and probabilistic performance of COVID-19 forecasting models during 2021. We also compare the performance of aggregated forecasts (i.e. ensembles) based on equal and CM performance-based weights to an established ensemble from the Centers for Disease Control and Prevention (CDC). Our analysis of forecasts of COVID-19 mortality from 22 individual models and three ensembles across 49 states indicates that: (i) good predictive performance does not imply good probabilistic performance, and vice versa; (ii) models often provide tight but inaccurate uncertainty estimates; (iii) most models perform worse than a naive baseline model; (iv) both the CDC and CM performance-weighted ensembles perform well; but (v) while the CDC ensemble was more informative, the CM ensemble was more statistically accurate across states. This study presents a worthwhile method for appropriately assessing the performance of probabilistic forecasts and can potentially improve both public health decision-making and COVID-19 modelling.
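The contrast between equal and CM performance-based weighting can be sketched as a weighted average of per-model quantile forecasts. This is a minimal illustration, not the paper's actual aggregation pipeline: the function name, the three-model example, and the weight values are all hypothetical, and real CM weights would be derived from each model's calibration and information scores.

```python
import numpy as np

def ensemble_forecast(model_quantiles, weights=None):
    """Combine per-model quantile forecasts into an ensemble forecast.

    model_quantiles: shape (n_models, n_quantiles), each model's forecast
    at a common set of quantile levels (a simplified stand-in for the
    full probabilistic forecasts evaluated in the study).
    weights: per-model weights (e.g. performance-based weights from the
    classical model); defaults to equal weighting.
    """
    q = np.asarray(model_quantiles, dtype=float)
    if weights is None:
        weights = np.full(q.shape[0], 1.0 / q.shape[0])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise so the weights sum to 1
    return w @ q     # weighted average at each quantile level

# Three hypothetical models forecasting the 5%, 50%, and 95% quantiles
# of weekly COVID-19 deaths in one state (illustrative numbers only).
forecasts = [[80, 100, 130],
             [90, 110, 150],
             [70,  95, 120]]

equal_ens = ensemble_forecast(forecasts)                       # equal weights
cm_ens = ensemble_forecast(forecasts, weights=[0.5, 0.2, 0.3])  # hypothetical CM weights
```

A better-performing model pulls the CM-weighted ensemble toward its own quantiles, whereas the equal-weight ensemble treats all models alike regardless of past performance.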
