Abstract

Decision Making Under Deep Uncertainty analyses often rely on prohibitively large scenario ensembles to calculate robustness and rank policy performance. This paper contributes a framework that uses subsampling algorithms and space-filling metrics to determine how smaller ensemble sizes affect the accuracy of robustness rankings. Subsampling methods create smaller scenario ensembles of varying sizes; we evaluate the sensitivity of the rankings to ensemble size and calculate their accuracy relative to a baseline ranking. Metrics of scenario-set quality then predict ranking accuracy. Notably, neither the metrics nor the subsampling methods require additional model simulations. We demonstrate the framework with a case study of shortage policies for Lake Mead in the Colorado River Basin (CRB). Results suggest that ensembles smaller than those used in previous studies can accurately rank Lake Mead policies, and that accuracy depends on the type of objective and robustness metric. Smaller ensembles could reduce the computational burden of robustness analyses in the ongoing CRB policy renegotiation.
