Abstract

Multivariate time series prediction is an important task that plays a significant role in climate, economics, and other fields. Attention-based Encoder-Decoder networks are commonly used for multivariate time series prediction because the attention mechanism helps the model focus on the truly important attributes. However, Encoder-Decoder networks suffer from degrading prediction accuracy as the input sequence grows longer, which means they cannot process long series and therefore cannot exploit detailed historical information. In this paper, we propose a dual-window deep neural network (DWNet) for time series prediction. The dual-window mechanism allows the model to mine multigranularity dependencies of the time series: local information obtained from a short sequence and global information obtained from a long sequence. Our model outperforms nine baseline methods on four different datasets.
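The dual-window idea described above can be illustrated with a minimal sketch: slice the most recent short window (local context) and long window (global context) from the same multivariate series and feed both to the model. The window lengths and function name below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def dual_windows(series, short_len=24, long_len=168):
    """Return the most recent short (local) and long (global) windows
    from a multivariate series of shape (T, n_features).

    short_len and long_len are hypothetical placeholders; the paper's
    actual window sizes may differ.
    """
    assert series.shape[0] >= long_len, "series shorter than the long window"
    long_win = series[-long_len:]     # global context: coarse, long-range trends
    short_win = series[-short_len:]   # local context: fine-grained recent dynamics
    return short_win, long_win

# Example: 200 time steps, 5 exogenous series
x = np.random.rand(200, 5)
local, global_ = dual_windows(x)
print(local.shape, global_.shape)  # (24, 5) (168, 5)
```

Note that the short window is a suffix of the long window, so the two views overlap by design: the model sees the recent past at both granularities.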

Highlights

  • In the age of big data, sequence data is everywhere in daily life [1, 2]

  • As a widely used traditional time series prediction algorithm, ARIMA [7] has shown its effectiveness in many areas

  • ARIMA performs worse than other models because it cannot capture nonlinear relationships and does not consider the spatial correlation between exogenous series [7]



Introduction

In the age of big data, sequence data is everywhere in daily life [1, 2]. Time series prediction algorithms are becoming increasingly important in many areas, such as financial market prediction [3], passenger demand forecasting [4], and heart signal prediction [5]. As a widely used traditional time series prediction algorithm, ARIMA [7] has shown its effectiveness in many areas. However, ARIMA cannot model nonlinear relationships and can only be applied to stationary time series [8,9,10]. RNNs suffer from the vanishing gradient problem, which makes it difficult to capture the long-term dependencies of time series [12]. Long Short-Term Memory (LSTM) [13] and the gated recurrent unit (GRU) [14, 15] alleviate the RNN vanishing gradient problem, and many models built on them have been developed for time series prediction, such as Encoder-Decoder networks [15, 16]. Encoder-Decoder networks perform well in time series prediction tasks, especially Attention-based ones.

