Abstract

Recently, transformer-based models have exhibited strong performance in multi-horizon time series forecasting tasks. However, the core module of these models, the self-attention mechanism, is insensitive to temporal order and suffers from attention dispersion over long time sequences. These limitations hinder the models from fully leveraging the features of time series data, particularly periodicity. Furthermore, the lack of consideration for temporal order also hinders the identification of important temporal variables in transformers. To resolve these problems, this article develops an attention-based deep learning model that better utilizes periodicity to improve prediction accuracy and enhance interpretability. We design a parallel skip LSTM module and a periodicity information utilization module to reinforce the connection between corresponding time steps within different periods and to mitigate excessively sparse attention. An improved variable selection mechanism is embedded into the parallel skip LSTM so that temporal information can be taken into account when analyzing interpretability. Experimental findings on different types of real datasets show that the proposed model outperforms numerous baseline models in prediction accuracy while providing a degree of interpretability.
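The skip-recurrent idea mentioned above can be illustrated with a minimal sketch: the recurrent state fed into time step t comes from step t - p (p being an assumed period length) rather than from the immediately preceding step, so corresponding time steps across periods are directly connected. The class name, the period value, and the overall structure below are illustrative assumptions in the spirit of LSTNet-style skip recurrence, not the paper's exact parallel skip LSTM module.

```python
# Minimal sketch of a skip-recurrent LSTM (assumption: illustrates the general
# skip-recurrence idea, not the paper's exact parallel skip LSTM module).
import torch
import torch.nn as nn

class SkipLSTM(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, period: int):
        super().__init__()
        self.period = period                      # p: assumed period length
        self.hidden_size = hidden_size
        self.cell = nn.LSTMCell(input_size, hidden_size)

    def forward(self, x):                         # x: (batch, time, features)
        batch, steps, _ = x.shape
        zeros = x.new_zeros(batch, self.hidden_size)
        h_hist, c_hist = [], []                   # per-step hidden/cell states
        for t in range(steps):
            # Recurrent input is the state from p steps back (zeros if t < p),
            # not the state from the immediately preceding step.
            h_prev = h_hist[t - self.period] if t >= self.period else zeros
            c_prev = c_hist[t - self.period] if t >= self.period else zeros
            h, c = self.cell(x[:, t], (h_prev, c_prev))
            h_hist.append(h)
            c_hist.append(c)
        return torch.stack(h_hist, dim=1)         # (batch, time, hidden)

# Tiny usage example with a daily period over hourly data (p = 24 is illustrative).
out = SkipLSTM(input_size=8, hidden_size=16, period=24)(torch.randn(4, 96, 8))
print(out.shape)  # torch.Size([4, 96, 16])
```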
