Abstract

Improving the forecasting performance of rainfall-runoff models is important because basin response is highly complex and data are often limited. Recently, many deep learning studies have achieved significant performance improvements, but the intrinsic characteristics of these models remain unclear and largely unexplored. In this paper, we pioneer the explicit use of short lag-times in rainfall-runoff modeling and measure their influence on model performance. The proposed model, long short-term memory with attentive long and short lag-time (LSTM-ALSL), simultaneously and explicitly uses two data structures, long and short lag-time windows, to enhance rainfall-runoff forecasting accuracy by jointly extracting better features. In addition, self-attention is employed to model the temporal dependencies within the long and short lag-times, further enhancing model performance. The results indicate that LSTM-ALSL outperformed state-of-the-art counterparts at four mesoscale stations (1846–9208 km²) with humid climates (aridity index 0.77–1.16) in the U.S.A., for both peak flow and base flow.
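To make the described architecture concrete, the sketch below (PyTorch) shows one plausible reading of LSTM-ALSL: the long and short lag-time windows are each passed through self-attention and an LSTM encoder, and the two feature vectors are joined for the forecast. The class name, layer sizes, and merging scheme are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class LSTMALSLSketch(nn.Module):
        """Hypothetical sketch of LSTM-ALSL: self-attention over long and
        short lag-time inputs, LSTM encoding, and a joint forecast head."""
        def __init__(self, n_features, d_model=64):
            super().__init__()
            self.proj_long = nn.Linear(n_features, d_model)
            self.proj_short = nn.Linear(n_features, d_model)
            # Self-attention models temporal dependencies within each window.
            self.attn_long = nn.MultiheadAttention(d_model, 4, batch_first=True)
            self.attn_short = nn.MultiheadAttention(d_model, 4, batch_first=True)
            self.lstm_long = nn.LSTM(d_model, d_model, batch_first=True)
            self.lstm_short = nn.LSTM(d_model, d_model, batch_first=True)
            self.head = nn.Linear(2 * d_model, 1)  # joint features -> runoff

        def forward(self, x_long, x_short):
            # x_long: (batch, long_lag, n_features); x_short: (batch, short_lag, n_features)
            h_long = self.proj_long(x_long)
            h_short = self.proj_short(x_short)
            h_long, _ = self.attn_long(h_long, h_long, h_long)
            h_short, _ = self.attn_short(h_short, h_short, h_short)
            _, (z_long, _) = self.lstm_long(h_long)     # final hidden states
            _, (z_short, _) = self.lstm_short(h_short)
            z = torch.cat([z_long[-1], z_short[-1]], dim=-1)
            return self.head(z).squeeze(-1)             # one-step-ahead runoff

For example, model(torch.randn(8, 30, 2), torch.randn(8, 6, 2)) would map a 30-step long window and a 6-step short window of two forcing variables (e.g., rainfall and runoff) to a batch of eight forecasts.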

Highlights

  • Rainfall-runoff modeling is of great importance for water resource management practices, such as flood protection, reservoir operation, inland shipping, irrigation, and drought mitigation [1–6]

  • Relative to the LSTM model, the error is reduced by at least 8.1% and the MAE by at least 10.6%. This indicates that the short lag-time effectively improves the accuracy of runoff forecasting

  • The LSTM-ALSL model applies a self-attention mechanism to both the long and short lag-time inputs, as sketched below
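The self-attention referred to in the last highlight is presumably the standard scaled dot-product form (Vaswani et al., 2017); a minimal sketch, with hypothetical projection matrices w_q, w_k, and w_v, is:

    import torch

    def self_attention(x, w_q, w_k, w_v):
        # x: (batch, lag_steps, d) window of lag-time features
        # w_q, w_k, w_v: (d, d) learned projection matrices
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
        weights = torch.softmax(scores, dim=-1)  # dependencies between time steps
        return weights @ v                       # attention-weighted features

Applying this separately to the long and short lag-time windows lets the model weight the time steps within each window before they are encoded.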



Introduction

Rainfall-runoff modeling is of great importance for water resource management practices, such as flood protection, reservoir operation, inland shipping, irrigation, and drought mitigation [1–6]. At the initial development stages of empirical models, linear data-driven methods, such as the autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA), were applied to runoff forecasting [13,14]. These models perform well when the input runoff time series is near linear, but they cannot capture the nonlinear properties hidden in the time series [15]. Many nonlinear machine learning methods have since been used in runoff forecasting, such as artificial neural networks (ANNs) [16–18], genetic programming (GP) [19–21], support vector machines (SVMs) [22], support vector regression (SVR) [23], self-organizing maps (SOMs) [24], adaptive neuro-fuzzy inference systems (ANFISs) [25], random forests (RFs) [26,27], generalized regression neural networks (GRNNs), and extreme gradient boosting (XGBoost) [28]. These nonlinear models capture more time-series information and improve forecasting accuracy.
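As a point of reference for the linear baselines mentioned above, the following minimal sketch fits an ARIMA model to a synthetic runoff series using statsmodels; the series and the model order are illustrative assumptions, not from the paper.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    # Synthetic daily runoff series (m^3/s); a real study would use gauge data.
    runoff = 50 + np.cumsum(rng.normal(0, 1, 365))

    model = ARIMA(runoff, order=(2, 1, 1))  # AR order 2, one difference, MA order 1
    result = model.fit()
    forecast = result.forecast(steps=7)     # 7-step-ahead runoff forecast

Because such a model is linear in its (differenced) inputs, it cannot capture the nonlinear properties of the runoff series noted above, which is what motivates the machine learning methods listed in this section.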

Methodology
Forecasting Factor Selection
Optimization of Model Selection
Results and Discussion
Comparisons of Forecasting Results
Conclusions