Abstract

The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community still performs model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods are well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we review existing probabilistic forecast verification methods and apply them to evaluate and compare model ensembles produced by two parameter uncertainty estimation methods: Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffled Complex Evolution Metropolis (SCEM) algorithm. Model ensembles are generated with the National Weather Service Sacramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information beyond what is already available as part of traditional model validation and considers the entire ensemble, or uncertainty range, in the approach.
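
The following sketch is a minimal, hedged illustration of the kind of ensemble-oriented evaluation described above: it computes the continuous ranked probability score (CRPS) and a rank histogram for a synthetic discharge ensemble. The data, ensemble size, and function names are illustrative assumptions, not the implementation used in this study; only NumPy is assumed.

    import numpy as np

    def crps_ensemble(members, obs):
        """CRPS for one ensemble forecast against a single observation.

        Uses the kernel (energy-score) form of the CRPS for an empirical
        ensemble: E|X - y| - 0.5 * E|X - X'|.
        """
        members = np.asarray(members, dtype=float)
        term1 = np.mean(np.abs(members - obs))
        term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
        return term1 - term2

    def rank_histogram(ensemble, observations):
        """Counts of the rank of each observation within its ensemble.

        ensemble: (n_times, n_members) simulated discharge
        observations: (n_times,) observed discharge
        A roughly flat histogram suggests the ensemble spread is consistent
        with the observations; U or dome shapes indicate under- or
        over-dispersion.
        """
        n_times, n_members = ensemble.shape
        ranks = np.sum(ensemble < observations[:, None], axis=1)  # 0..n_members
        return np.bincount(ranks, minlength=n_members + 1)

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        obs = rng.gamma(shape=2.0, scale=5.0, size=365)  # synthetic "observed" flow
        ens = obs[:, None] * rng.lognormal(0.0, 0.3, size=(365, 50))  # synthetic 50-member ensemble
        mean_crps = np.mean([crps_ensemble(ens[t], obs[t]) for t in range(len(obs))])
        print("mean CRPS:", mean_crps)
        print("rank histogram:", rank_histogram(ens, obs))

For a single-member "ensemble" the CRPS collapses to the absolute error, which makes it a convenient bridge between the deterministic measures mentioned above and fully probabilistic evaluation.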

Highlights

  • In the classic definition, forecast verification is the process of assessing the skill of a forecast or set of forecasts (Murphy and Winkler, 1987; Jolliffe and Stephenson, 2003; Wilks, 2006)

  • Although the Shuffled Complex Evolution Metropolis (SCEM) parameter ensembles are larger than the Generalized Likelihood Uncertainty Estimation (GLUE) parameter ensembles, the range of the SCEM parameter ensembles is much narrower, spanning less than 30% of the feasible parameter space at all sites

  • The range and inter-quartile range (IQR) of the parameter and discharge ensembles are compared in Fig. 2b and d, respectively (a sketch of how such spread measures can be computed follows this list)
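
Because the highlights compare ensemble spread against the feasible parameter space, the sketch below shows one straightforward way such range and IQR fractions could be computed. The parameter bounds and the GLUE/SCEM sample stand-ins are hypothetical placeholders, not values from the study.

    import numpy as np

    def ensemble_spread(samples, lower, upper):
        """Range and inter-quartile range of a parameter ensemble,
        expressed as fractions of the feasible range [lower, upper]."""
        samples = np.asarray(samples, dtype=float)
        feasible = upper - lower
        full_range = (samples.max() - samples.min()) / feasible
        q75, q25 = np.percentile(samples, [75, 25])
        return full_range, (q75 - q25) / feasible

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        lower, upper = 10.0, 150.0  # hypothetical feasible bounds for one parameter
        glue = rng.uniform(lower, upper, size=2000)                  # stand-in for behavioural GLUE samples
        scem = rng.normal(80.0, 8.0, size=5000).clip(lower, upper)   # stand-in for SCEM posterior samples
        for name, samples in (("GLUE", glue), ("SCEM", scem)):
            rng_frac, iqr_frac = ensemble_spread(samples, lower, upper)
            print(f"{name}: range = {rng_frac:.2f}, IQR = {iqr_frac:.2f} of feasible space")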



Introduction

Forecast verification is the process of assessing the skill of a forecast or set of forecasts (Murphy and Winkler, 1987; Jolliffe and Stephenson, 2003; Wilks, 2006). Verification methods have been well developed in the atmospheric sciences (Jolliffe and Stephenson, 2003; Wilks, 2006), and their application to hydrologic forecasts, particularly probabilistic verification, has been progressing in recent years (Franz et al., 2003; Bradley et al., 2004; Verbunt et al., 2006; Laio and Tamea, 2007; Bartholmes et al., 2009; Renner et al., 2009; Brown et al., 2010; Demargne et al., 2010; Randrianasolo et al., 2010). One of the earliest attempts at verification was published by Finley (1884), who evaluated the success of tornado forecasts. His early (and controversial) work sparked interest and a range of alternative verification methods, many of which are in use today (Murphy, 1997). Verification methods, from the early work of Finley (1884) to recent work by Bradley and Schwartz (2011), involve the comparison of a forecast (or set of forecasts) to the corresponding observation (Wilks, 2006). Murphy and Epstein (1967) lay out simple goals for forecast verification.
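
Finley's tornado forecasts are the textbook case of categorical (yes/no) verification with a 2 x 2 contingency table, and they motivate several of the categorical statistics used later in this paper. The short sketch below uses the counts commonly quoted for that dataset in the verification literature and computes a few standard scores; the numbers and variable names should be treated as illustrative.

    # Categorical verification of yes/no forecasts via a 2 x 2 contingency table.
    # Counts commonly quoted for Finley's 1884 tornado forecasts (illustrative).
    hits, false_alarms, misses, correct_negatives = 28, 72, 23, 2680
    n = hits + false_alarms + misses + correct_negatives

    percent_correct = (hits + correct_negatives) / n
    probability_of_detection = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    critical_success_index = hits / (hits + misses + false_alarms)  # threat score

    # Never forecasting a tornado scores higher on percent correct, the critique
    # that spurred the alternative measures mentioned above.
    always_no_percent_correct = (false_alarms + correct_negatives) / n

    print(f"percent correct          : {percent_correct:.3f}")
    print(f"'always no' baseline     : {always_no_percent_correct:.3f}")
    print(f"probability of detection : {probability_of_detection:.3f}")
    print(f"false alarm ratio        : {false_alarm_ratio:.3f}")
    print(f"critical success index   : {critical_success_index:.3f}")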
