Abstract

This study assessed the performance of the long short-term memory (LSTM) algorithm, which is well suited to time series prediction, on a multivariate dataset containing missing values. The complete dataset for the LSTM model was generated by running the Hydrological Simulation Program-Fortran (HSPF), a widely used watershed model, in the upper Nam River Basin at a daily time step for three years (2016-2018), excluding a one-year warm-up period. The prediction accuracy of the LSTM model was evaluated with respect to various interpolation methods, as well as to changes in the number of missing values (in the dependent variables) and in the number of independent variables containing a fixed number of missing values (for either single or multiple variables). The entire dataset was divided into training and test sets at a ratio of 7:3. Results showed that the choice of interpolation method led to considerable variation in LSTM performance. Among the methods tested, StructTS and RPART were identified as the best imputation methods for recovering missing values of discharge and total phosphorus, respectively. The prediction error of the LSTM model increased gradually as the number of missing values rose from 300 to 700. Nevertheless, the LSTM model maintained its performance fairly well even for datasets with a large number of missing values, provided that an adequate interpolation method was adopted for each dependent variable. Model performance degraded further as the number of independent variables containing a fixed number of missing values increased from 1 to 7. We believe the proposed methodology can be used not only to reconstruct missing values in real-time monitoring datasets with excellent performance, but also to improve the prediction accuracy of (time series) deep learning models.
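
As an illustration only (the abstract is not accompanied by code), the sketch below outlines the general workflow described above: impute gaps in a daily multivariate series, split it 7:3 into training and test sets, and fit an LSTM to predict next-day discharge. The StructTS and RPART imputations used in the study are R-based methods; a simple pandas time interpolation stands in for them here. The variable names, the 7-day lookback window, the network size, and the use of TensorFlow/Keras are all assumptions, not details from the paper.

```python
import numpy as np
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

# Hypothetical daily dataset: discharge and total phosphorus (TP) with gaps.
rng = pd.date_range("2016-01-01", "2018-12-31", freq="D")
df = pd.DataFrame(
    {"discharge": np.random.rand(len(rng)), "tp": np.random.rand(len(rng))},
    index=rng,
)
df.iloc[100:130, 0] = np.nan          # introduce an artificial gap
df = df.interpolate(method="time")    # stand-in for StructTS / RPART imputation

def make_windows(values, lookback=7):
    """Build (samples, lookback, features) windows and next-day targets."""
    X, y = [], []
    for i in range(len(values) - lookback):
        X.append(values[i : i + lookback])
        y.append(values[i + lookback, 0])   # predict next-day discharge
    return np.array(X), np.array(y)

X, y = make_windows(df.to_numpy())
split = int(len(X) * 0.7)               # 7:3 train/test split as in the study
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = Sequential([Input(shape=X.shape[1:]), LSTM(32), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)
print("test MSE:", model.evaluate(X_test, y_test, verbose=0))
```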
