Abstract

Earlier works on dynamic spatial-temporal data modelling favour spatial-temporal factorized graph convolutional networks (GCNs), which are easy to interpret but fail to capture joint spatial-temporal correlations. Much subsequent research has therefore focused on constructing a localized adjacency matrix to capture joint features from the spatial and temporal dimensions simultaneously. However, these adjacency matrices are usually built heuristically, which makes the models difficult to interpret, and the lack of theoretical grounding hinders their generalization. We introduce a general framework that models dynamic spatial-temporal graph data from the perspective of graph products. Using graph products, we propose a systematic way of constructing spatial-temporal adjacency graphs, which not only improves the model's interpretability but also enlarges the spatial-temporal receptive field. Under this framework, existing methods can be viewed as special cases of our model. Extensive experiments on multiple large-scale real-world datasets (NTU-RGB+D60, NTU-RGB+D120, UAV-Human, PEMS03, PEMS04, PEMS07, and PEMS08) demonstrate that the proposed model generalizes to most scenarios and outperforms state-of-the-art methods by a significant margin.
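To illustrate the idea behind the graph-product construction, the following is a minimal sketch (not the paper's implementation, whose exact product choice is not specified in the abstract) of how the standard Cartesian, Kronecker, and strong graph products combine a spatial adjacency matrix A_s with a temporal adjacency matrix A_t into a single joint spatial-temporal adjacency matrix; the `cartesian`, `kronecker`, and `strong` helpers and the toy skeleton graph are hypothetical names for this example.

```python
# Sketch: building joint spatial-temporal adjacency matrices via
# standard graph products. Vertices of the product graph are
# (joint, frame) pairs, so the result is (n*m) x (n*m).
import numpy as np

def cartesian(A_s, A_t):
    # G square H: purely spatial edges plus purely temporal edges.
    # Factorized ST-GCNs effectively apply these two terms separately.
    n, m = len(A_s), len(A_t)
    return np.kron(A_s, np.eye(m)) + np.kron(np.eye(n), A_t)

def kronecker(A_s, A_t):
    # G (x) H: only "diagonal" edges that cross space and time at once.
    return np.kron(A_s, A_t)

def strong(A_s, A_t):
    # G boxtimes H: union of the two above; adds diagonal space-time
    # edges and thereby enlarges the spatial-temporal receptive field.
    return cartesian(A_s, A_t) + kronecker(A_s, A_t)

# Toy example: a 3-joint chain observed over 4 frames (path graph in time).
A_s = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
A_t = np.eye(4, k=1) + np.eye(4, k=-1)

A_st = strong(A_s, A_t)  # joint adjacency over all (joint, frame) pairs
print(A_st.shape)        # (12, 12)
```

Under this view, a factorized model that alternates spatial and temporal graph convolutions corresponds to using the two Cartesian-product terms in isolation, which is one way existing methods arise as special cases of a product-graph formulation.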
