Abstract

Spatio-temporal graph neural networks (GNNs) that model inter-variable dependencies with learned graph structures have received attention for multivariate time series forecasting due to their superior performance. However, existing methods exhibit quadratic complexity, limiting their ability to handle a large number of variables. Moreover, the message-passing mechanism employed by spatio-temporal GNNs can be seen as a form of Laplacian smoothing, which weakens the inherent evolution patterns of individual nodes and overlooks the dynamic impact of a node's own information, and that of its neighbors, on future outcomes. In this paper, we propose a novel approach called the Dynamic Personalized Graph Neural Network (DPGNN), a graph learning framework with linear complexity. To facilitate linearization, we conceptualize the spatio-temporal GNN as an information flow from the source (raw node features) to the sink (new node features), and employ the flow conservation principle to derive an attention mechanism that effectively models dependencies between variables. To address the second issue, inspired by PageRank, we design a dynamic personalized graph convolution scheme that explicitly quantifies the impact of each node on subsequent results based on the dynamic graph signals at each propagation step. Experimental results on six benchmark datasets demonstrate the superiority of our approach over state-of-the-art methods based on both pre-defined graph structures and graph learning techniques.
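As a rough illustration of the general ideas referenced above (not the paper's exact formulation), the following is a minimal sketch of a personalized-PageRank-style propagation step, in which each node retains a fraction of its own raw features at every round to counteract over-smoothing. The function name, shapes, and the fixed retention coefficient `alpha` are illustrative assumptions; DPGNN's dynamic, per-node weighting is not reproduced here.

```python
# Hypothetical sketch of personalized-PageRank-style propagation (APPNP-like),
# not the paper's DPGNN scheme. All names and shapes are assumptions.
import numpy as np

def personalized_propagation(adj, x, num_steps=4, alpha=0.1):
    """Propagate node features while each node keeps a fraction `alpha`
    of its own raw features at every step, which mitigates the
    over-smoothing effect of plain message passing.

    adj : (N, N) adjacency matrix (learned or pre-defined)
    x   : (N, F) raw node features
    """
    # Row-normalize the adjacency so each step is a weighted neighbor average.
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    p = adj / deg

    h = x.copy()
    for _ in range(num_steps):
        # Mix neighbor information with the node's own raw features.
        h = (1.0 - alpha) * (p @ h) + alpha * x
    return h

# Toy usage on a 4-node chain graph with 2-dimensional features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.randn(4, 2)
out = personalized_propagation(adj, x)
print(out.shape)  # (4, 2)
```

Similarly, the sketch below shows a generic kernel-feature linear attention, which computes pairwise dependencies without materializing the N x N score matrix and therefore scales linearly in the number of variables. This is a standard construction offered only as an analogy; the paper's flow-conservation attention is not reproduced here.

```python
# Hypothetical sketch of linear-complexity attention via a positive feature map;
# the flow-conservation attention described in the abstract is not shown.
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Attention without the explicit N x N score matrix.

    q, k : (N, D) query/key features
    v    : (N, F) value features
    Cost is O(N * D * F) rather than O(N^2).
    """
    phi = lambda z: np.maximum(z, 0.0) + eps   # simple positive feature map
    q, k = phi(q), phi(k)
    kv = k.T @ v                               # (D, F) summary over all nodes
    normalizer = q @ k.sum(axis=0, keepdims=True).T  # (N, 1)
    return (q @ kv) / (normalizer + eps)       # (N, F)

# Toy usage: 4 nodes, 3-dimensional queries/keys, 2-dimensional values.
q, k, v = np.random.randn(4, 3), np.random.randn(4, 3), np.random.randn(4, 2)
print(linear_attention(q, k, v).shape)  # (4, 2)
```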
