Abstract

In this study, the uncertainty in runoff simulations using hydrological models was quantified with respect to the choice of five evaluation metrics and the calibration data length. The calibration data length was varied from 1 to 11 years, and runoff analysis was performed using the Soil and Water Assessment Tool (SWAT). SWAT parameter optimization was then performed using R-SWAT. The results show that uncertainty was lower when using a calibration data length of five to seven years, with seven years yielding the lowest uncertainty. Runoff simulations using calibration data lengths of more than seven years showed higher uncertainty overall, but lower uncertainty for extreme runoff, than simulations calibrated with fewer than five years of data. Different evaluation metrics indicated different levels of uncertainty, which means that multiple metrics should be considered rather than relying on any single one. Among the evaluation metrics, the Nash–Sutcliffe efficiency (NSE) and normalized root-mean-square error (NRMSE) showed large uncertainties at short calibration data lengths, whereas the Kling–Gupta efficiency (KGE) and percent bias (PBIAS) showed large uncertainties at long calibration data lengths.
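For reference, the sketch below shows one way the four evaluation metrics named above can be computed for an observed versus simulated runoff series, using standard formulations. It is illustrative only and not taken from the paper: the NRMSE normalization and the PBIAS sign convention vary across studies, so the paper's exact definitions may differ, and the synthetic data merely stand in for observed and simulated daily runoff.

```python
# Hedged sketch: common formulations of NSE, KGE, NRMSE, and PBIAS.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, values < 0 are worse than the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (Gupta et al., 2009 formulation)."""
    r = np.corrcoef(obs, sim)[0, 1]        # linear correlation
    alpha = sim.std() / obs.std()          # variability ratio
    beta = sim.mean() / obs.mean()         # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def nrmse(obs, sim):
    """RMSE normalized by the observed mean (one common convention; others use the range)."""
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / obs.mean()

def pbias(obs, sim):
    """Percent bias; with this sign convention, positive values indicate underestimation."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Synthetic example standing in for one year of observed/simulated runoff.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=5.0, size=365)
sim = obs * 0.95 + rng.normal(0.0, 1.0, size=365)
print(f"NSE={nse(obs, sim):.3f}  KGE={kge(obs, sim):.3f}  "
      f"NRMSE={nrmse(obs, sim):.3f}  PBIAS={pbias(obs, sim):.2f}%")
```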
