Abstract

Study region
The Yangtze River basin of China.

Study focus
We applied a recently popular deep learning (DL) algorithm, the Transformer (TSF), and two commonly used DL methods, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), to evaluate the performance of TSF in predicting runoff in the Yangtze River basin. We also added the main structural component of TSF, self-attention (SA), to the LSTM and GRU models (denoted LSTM-SA and GRU-SA) to investigate whether the SA mechanism improves prediction capability. Seven climatic observations (mean temperature, maximum temperature, precipitation, etc.) served as the input data, and the whole dataset was divided into training, validation and test sets. In addition, we investigated the relationship between model performance and input time step.

New hydrological insights for the region
Our experimental results show that GRU achieves the best performance with the fewest parameters, while TSF performs worst owing to insufficient data. The GRU and LSTM models outperform TSF for runoff prediction when training samples are limited (e.g., when the number of model parameters is roughly ten times the number of samples). Furthermore, adding the SA mechanism to the LSTM and GRU structures improves prediction accuracy. Different input time steps (5 d, 10 d, 15 d, 20 d, 25 d and 30 d) were used to train the DL models with different prediction lengths, showing that an appropriate input time step can significantly improve model performance.
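
To make the LSTM-SA variant concrete, the sketch below is our illustration of one common way to place a self-attention layer over LSTM hidden states in PyTorch; it is not the authors' code, and the class name `LSTMSA`, the hidden size, and the head count are assumptions (the GRU-SA variant would swap `nn.LSTM` for `nn.GRU`).

```python
# Minimal sketch (assumed, not the paper's implementation) of an LSTM
# followed by multi-head self-attention over its hidden states.
import torch
import torch.nn as nn

class LSTMSA(nn.Module):
    def __init__(self, n_features=7, hidden=64, n_heads=4, horizon=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Self-attention over the sequence of LSTM outputs
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):              # x: (batch, time_steps, n_features)
        h, _ = self.lstm(x)            # h: (batch, time_steps, hidden)
        a, _ = self.attn(h, h, h)      # self-attention: Q = K = V = h
        return self.head(a[:, -1, :])  # predict runoff from the last step
```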
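The input-time-step experiments amount to slicing the daily series into fixed-length windows paired with future runoff targets. A minimal sketch under assumed data shapes follows; the function name `make_windows` and the default 15 d window are illustrative, not taken from the paper.

```python
import numpy as np

def make_windows(climate, runoff, time_steps=15, horizon=1):
    """Slice daily series into (input window, target) pairs.

    climate: (n_days, 7) array of the seven climatic observations
    runoff:  (n_days,)  array of observed runoff
    """
    X, y = [], []
    for t in range(time_steps, len(climate) - horizon + 1):
        X.append(climate[t - time_steps:t])  # past `time_steps` days of drivers
        y.append(runoff[t:t + horizon])      # next `horizon` days of runoff
    return np.stack(X), np.stack(y)
```

Varying `time_steps` over 5, 10, 15, 20, 25 and 30 d, as in the study, changes how much past climate each model sees per prediction.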
