Abstract

Hybrid machine learning (ML) models have achieved strong forecasting accuracy across water-related fields, often outperforming standalone ML algorithms. Meanwhile, transformers have demonstrated remarkable capabilities in natural language processing thanks to their attention mechanism; their application to time series forecasting, particularly in hydrology and streamflow prediction, remains an evolving area. This study compared the performance of a transformer against several hybrid deep learning models in forecasting streamflow of the Syr Darya River. The hybrid models included an LSTM with an attention mechanism (LSTM-AM), an LSTM combined with ARIMA (LSTM-AR), and a convolutional neural network combined with an LSTM (ConvLSTM). Each model was tested at three hydrological stations located upstream, midstream, and downstream along the Syr Darya River, and forecasting performance was evaluated by comparing the RMSE, MAE, NSE and KGE values achieved by each model. The streamflow datasets exhibit short-term dependencies that LSTM models capture effectively, while transformers are more parameter-intensive than LSTMs; simpler models such as LSTMs therefore achieve predictive accuracy comparable to the hybrid models. While the LSTM-based models proved better suited to short-term forecasting, the transformer tended to excel at longer-term predictions, as it is better at capturing long-range dependencies in sequences. Nonetheless, all models showed declining predictive performance as the forecasting horizon increased. These findings demonstrate the suitability of transformers for accurate and cost-effective river flow forecasting while minimising data processing time.
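The four evaluation metrics named above (RMSE, MAE, NSE, KGE) have standard definitions in hydrology; a minimal NumPy sketch of how they might be computed is shown below. The array values are illustrative only and are not data from the study.

```python
import numpy as np

def rmse(obs, sim):
    # Root mean square error: 0 for a perfect forecast
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    # Mean absolute error: 0 for a perfect forecast
    return float(np.mean(np.abs(obs - sim)))

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 for a perfect forecast,
    # 0 means no better than predicting the observed mean
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

def kge(obs, sim):
    # Kling-Gupta efficiency: combines correlation (r),
    # variability ratio (alpha) and bias ratio (beta); 1 is perfect
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return float(1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2))

# Illustrative daily discharge values (m^3/s), not from the study
obs = np.array([10.0, 12.0, 15.0, 11.0, 9.0])
sim = np.array([9.5, 12.5, 14.0, 11.5, 9.0])
print(rmse(obs, sim), mae(obs, sim), nse(obs, sim), kge(obs, sim))
```

Unlike RMSE and MAE, the NSE and KGE scores are dimensionless, which makes them convenient for comparing stations with very different discharge magnitudes, as in the upstream/midstream/downstream comparison described above.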
