Abstract

Capturing long-term dependencies is essential for time-series prediction and spatial–temporal forecasting. In recent years, many deep learning-based forecasting methods have been proposed, leading to rapid progress in this area. We systematically review long-term dependency capture methods, covering temporal dependencies within a sequence (intra-sequence temporal dependencies) and temporal dependencies across sequences (inter-sequence temporal dependencies, where a long-term dependency is split across many subsequences). Because mini-batch training is widely adopted in machine learning and deep learning, many existing methods can only capture intra-sequence temporal dependencies, which limits their capacity for long-term dependency modeling. To address these problems, we design three types of memory mechanisms (a temporal-encoding memory mechanism, a cross-sequence memory mechanism, and a query-key-based memory mechanism) for long-term dependency capture. Moreover, building on the cross-sequence memory mechanism and the query-key architecture, we propose an Attention-based Long-Term Dependency Capture model (ALTDC) for long-term dependency modeling, which further addresses the temporal dependency coherence problem. ALTDC consists of a temporal Transformer and a spatial Transformer. The temporal Transformer applies multi-head attention along the temporal dimension and incorporates relative position encoding. The spatial Transformer applies attention along the spatial dimension and uses learnable position encoding and graph convolution to capture spatial relationships. Experiments demonstrate that the proposed model outperforms state-of-the-art baselines on real-world time-series and spatial–temporal datasets.
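
To make the factorized temporal/spatial attention concrete, the following is a minimal PyTorch sketch, not the authors' implementation: one block that attends over the time axis (temporal Transformer) and then over the node axis (spatial Transformer). The class name ALTDCBlock, the input layout (batch, time, nodes, d_model), and the layer sizes are illustrative assumptions; the position encodings, graph convolution, and memory mechanisms described in the abstract are omitted.

    # Sketch only: factorized temporal + spatial multi-head attention.
    # Assumed input shape: (batch, time, nodes, d_model).
    import torch
    import torch.nn as nn

    class ALTDCBlock(nn.Module):  # hypothetical name, not from the paper
        def __init__(self, d_model: int = 64, n_heads: int = 4):
            super().__init__()
            # Attention over the temporal dimension (applied per node).
            self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            # Attention over the spatial dimension (applied per time step).
            self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm_t = nn.LayerNorm(d_model)
            self.norm_s = nn.LayerNorm(d_model)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, t, n, d = x.shape
            # Temporal attention: fold nodes into the batch, attend across time steps.
            xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)
            xt = self.norm_t(xt + self.temporal_attn(xt, xt, xt)[0])
            x = xt.reshape(b, n, t, d).permute(0, 2, 1, 3)
            # Spatial attention: fold time into the batch, attend across nodes.
            xs = x.reshape(b * t, n, d)
            xs = self.norm_s(xs + self.spatial_attn(xs, xs, xs)[0])
            return xs.reshape(b, t, n, d)

    if __name__ == "__main__":
        block = ALTDCBlock()
        out = block(torch.randn(2, 12, 207, 64))  # e.g. 12 time steps, 207 sensors
        print(out.shape)  # torch.Size([2, 12, 207, 64])

The temporal and spatial attention are applied sequentially here for simplicity; the residual connections and layer normalization follow standard Transformer practice rather than any detail stated in the abstract.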
