Abstract

Mathematical models are useful for public health planning and response to infectious disease threats. However, different models can provide differing results, which can hamper decision making if not synthesized appropriately. To address this challenge, multi-model hubs convene independent modeling groups to generate ensembles, which are known to provide more accurate predictions of future outcomes. Yet these hubs are resource intensive, and it is not known how many models a hub needs. Here, we compare the benefit of predictions from multiple models in different contexts: (1) decision settings that depend on predictions of quantitative outcomes (e.g., hospital capacity planning), where assessments of the benefits of multi-model ensembles have largely focused; and (2) decision settings that require the ranking of alternative epidemic scenarios (e.g., comparing outcomes under multiple possible interventions and biological uncertainties). We develop a mathematical framework to mimic a multi-model prediction setting, and use this framework to quantify how frequently predictions from different models agree. We further explore multi-model agreement using real-world, empirical data from 14 rounds of U.S. COVID-19 Scenario Modeling Hub projections. Our results suggest that the value of multiple models may differ across decision contexts, and if only a few models are available, focusing on the rank of alternative epidemic scenarios could be more robust than focusing on quantitative outcomes. Although additional exploration of the sufficient number of models for different contexts is still needed, our results indicate that it may be possible to identify decision contexts where it is robust to rely on fewer models, a finding that can inform the use of modeling resources during future public health crises.
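
The distinction between agreement on quantitative outcomes and agreement on scenario rankings can be illustrated with a minimal sketch. The code below is not the authors' framework; it simply simulates a few hypothetical models whose predictions are noisy distortions of assumed true scenario outcomes, then compares pairwise agreement on magnitudes versus agreement on the ordering of scenarios. All quantities (number of models, noise level, tolerance) are illustrative assumptions.

```python
# Illustrative sketch only: compare quantitative vs. rank agreement
# among hypothetical models predicting outcomes for several scenarios.
import numpy as np

rng = np.random.default_rng(0)

n_models = 5          # assumed number of independent models in a "hub"
n_scenarios = 4       # e.g., four alternative intervention scenarios
true_outcomes = np.array([100.0, 150.0, 200.0, 250.0])  # assumed true outcomes

# Each model's prediction = truth distorted by model-specific multiplicative error.
model_errors = rng.lognormal(mean=0.0, sigma=0.3, size=(n_models, n_scenarios))
predictions = true_outcomes * model_errors   # shape: (n_models, n_scenarios)

def quantitative_agreement(p1, p2, tol=0.2):
    """Fraction of scenarios where two models' predictions differ by < tol (relative)."""
    rel_diff = np.abs(p1 - p2) / ((p1 + p2) / 2)
    return np.mean(rel_diff < tol)

def rank_agreement(p1, p2):
    """1.0 if the two models order the scenarios identically, else 0.0."""
    return float(np.array_equal(np.argsort(p1), np.argsort(p2)))

quant_scores, rank_scores = [], []
for i in range(n_models):
    for j in range(i + 1, n_models):
        quant_scores.append(quantitative_agreement(predictions[i], predictions[j]))
        rank_scores.append(rank_agreement(predictions[i], predictions[j]))

print(f"Mean pairwise quantitative agreement: {np.mean(quant_scores):.2f}")
print(f"Mean pairwise rank agreement:         {np.mean(rank_scores):.2f}")
```

Under these assumptions, pairs of models frequently disagree on exact magnitudes while still ranking the scenarios in the same order, which is the kind of contrast the abstract describes between the two decision contexts.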
