Abstract

Many calibrated hydrological models are inconsistent with the behavioral functions of catchments and do not fully represent the catchments’ underlying processes despite their seemingly adequate performance, if measured by traditional statistical error metrics. Using such metrics for calibration is hindered if only short-term data are available. This study investigated the influence of varying lengths of streamflow observation records on model calibration and evaluated the usefulness of a signature-based calibration approach in conceptual rainfall-runoff model calibration. Scenarios of continuous short-period observations were used to emulate poorly gauged catchments. Two approaches were employed to calibrate the HBV model for the Brue catchment in the UK. The first approach used single-objective optimization to maximize Nash–Sutcliffe efficiency (NSE) as a goodness-of-fit measure. The second approach involved multiobjective optimization based on maximizing the scores of 11 signature indices, as well as maximizing NSE. In addition, a diagnostic model evaluation approach was used to evaluate both model performance and behavioral consistency. The results showed that the HBV model was successfully calibrated using short-term datasets with a lower limit of approximately four months of data (10% FRD model). One formulation of the multiobjective signature-based optimization approach yielded the highest performance and hydrological consistency among all parameterization algorithms. The diagnostic model evaluation enabled the selection of consistent models reflecting catchment behavior and allowed an accurate detection of deficiencies in other models. It can be argued that signature-based calibration can be employed for building adequate models even in data-poor situations.
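The Nash–Sutcliffe efficiency (NSE) used as the goodness-of-fit objective in the first calibration approach compares the squared model residuals against the variance of the observations. A minimal sketch (NumPy; function and variable names are illustrative, not from the study):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual sum of
    squares to the sum of squared deviations of observations from their mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

NSE equals 1 for a perfect fit and 0 when the simulation is no better than simply predicting the mean of the observations, which is why maximizing NSE is a natural single-objective calibration target.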

Highlights

  • The selection process yielded 11 hydrological signatures listed in Table 1: three signatures extracted from three segments of the flow duration curve (FDC), four signatures relating streamflow to precipitation, and four signatures characterizing the discharge statistics

  • The same observation record (ending 30 June 1998 23:00) was used in all experiments, whereas the full dataset (FD) was used for calibration

  • Root mean square error (RMSE) values in the calibration and validation periods were small (no more than 1.7 mm), except for the 5%-FRD model, which showed a 5.65-mm RMSE in the validation period (Table 5)
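As an illustration of the FDC-based signatures in the first highlight (the study's exact 11 indices are listed in its Table 1 and not reproduced here), one widely used signature is the slope of the midsegment of the flow duration curve, which characterizes flow variability. A minimal sketch under that assumption:

```python
import numpy as np

def fdc_midslope(flows):
    """Slope of the flow duration curve between the 33rd and 66th
    exceedance percentiles, computed in log space.

    Q_p (flow exceeded p% of the time) corresponds to the
    (100 - p)th percentile of the flow record."""
    flows = np.asarray(flows, dtype=float)
    q33 = np.percentile(flows, 100 - 33)  # high-ish flow, exceeded 33% of the time
    q66 = np.percentile(flows, 100 - 66)  # lower flow, exceeded 66% of the time
    return (np.log(q33) - np.log(q66)) / (0.66 - 0.33)
```

In a signature-based calibration, such indices are computed from both observed and simulated series, and the optimizer rewards parameter sets whose simulated signatures match the observed ones.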

Introduction

Model calibration in a hydrological modeling context entails finding the most appropriate set of parameters so that the model outputs best resemble the observed system's behavior. Calibration can be performed manually, but this is inefficient because it is time-consuming and depends on the modeler's experience. Much effort has therefore been made over the past decades to develop effective and efficient calibration methods, such as automated (computer-based) calibration, especially in view of advances in computer technology and algorithmic support for solving optimization problems [1,2]. The most widely used metrics are borrowed from classical statistical approaches, such as minimizing squared residuals (the differences between observations and model simulation outputs), maximizing the correlation coefficient, or aggregating several metrics, as in the Kling–Gupta efficiency [1,3,4,5].
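For concreteness, the Kling–Gupta efficiency mentioned above aggregates three components of model fit: linear correlation (r), a variability ratio (alpha), and a bias ratio (beta). A minimal sketch of the standard 2009 formulation (function and variable names are illustrative):

```python
import numpy as np

def kge(obs, sim):
    """Kling-Gupta efficiency (Gupta et al., 2009):
    1 - Euclidean distance of (r, alpha, beta) from the ideal point (1, 1, 1)."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    r = np.corrcoef(obs, sim)[0, 1]       # linear correlation
    alpha = sim.std() / obs.std()         # variability ratio
    beta = sim.mean() / obs.mean()        # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Like NSE, KGE equals 1 for a perfect fit; because it penalizes correlation, variability, and bias errors separately, it is often preferred as a single aggregated calibration objective.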
