Abstract

From regional to continental scales, hydrologic processes are represented by modular modeling frameworks that depend on input datasets and parameter sets representing physical attributes. The hydrologic community needs a common procedure to evaluate model output based on parameter sensitivities and uncertainties compared against performance metrics (i.e., objective functions), especially for large parameter sets. We developed a reproducible workflow for evaluating hydrologic models that objectively analyzes model outputs as a function of parameter choice using numerical and visualization techniques. The workflow was implemented on three separate case studies, each with a different hydrologic model, and the results can be reproduced and visualized from a community GitHub code repository. Model parameter sensitivity was evaluated using several global sensitivity indices and Bayesian theory. Uncertainty in parameter spaces was quantified to highlight the impact of unreliable input data on model output. Model parameter sensitivities and uncertainties were evaluated numerically and visually to provide a comprehensive perspective on their impacts on model output. For each case study, we provide a summary and interpretation of the workflow results. Our workflow can be integrated into hydrologic modeling frameworks for objective evaluation of modular models and parameter sets based on a data‐driven approach appropriate for model selection.
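To illustrate the kind of global sensitivity analysis the abstract refers to, the sketch below estimates first-order Sobol indices with the Saltelli pick-and-freeze estimator. The toy model `g` and all parameter names are illustrative assumptions, not the models or parameters used in the paper's case studies.

```python
# Minimal sketch of variance-based global sensitivity analysis
# (first-order Sobol indices via the Saltelli 2010 estimator).
# g() is a stand-in model, NOT any of the paper's hydrologic models.
import numpy as np

rng = np.random.default_rng(42)

def g(x):
    # Toy model: output dominated by the first parameter,
    # weakly affected by the second and third.
    return 5.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

k, n = 3, 4096                       # number of parameters, sample size
A = rng.uniform(0.0, 1.0, (n, k))    # two independent sample matrices
B = rng.uniform(0.0, 1.0, (n, k))
fA, fB = g(A), g(B)
total_var = np.var(np.concatenate([fA, fB]))

S = np.empty(k)
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # vary only the i-th parameter
    # First-order index: fraction of output variance explained by x_i
    S[i] = np.mean(fB * (g(ABi) - fA)) / total_var

print(np.round(S, 2))  # the first parameter should dominate
```

In practice a library such as SALib wraps these estimators (and others, e.g. Morris screening), but the pure-NumPy version above shows the mechanics: two independent sample matrices, recombined column by column, with each index read off as a normalized covariance.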
