Abstract

The success of intelligent transportation systems relies heavily on accurate traffic prediction, and how to model the underlying spatial-temporal information in traffic data has come under the spotlight. Most existing frameworks use separate modules to model spatial and temporal correlations. However, this stepwise pattern may limit the effectiveness and efficiency of spatial-temporal feature extraction and cause important information to be overlooked at certain steps. Furthermore, modeling based on a given spatial adjacency graph (e.g., one derived from geodesic distance or approximate connectivity) lacks sufficient guidance from prior information and may not reflect the actual interactions between nodes. To overcome these limitations, this paper proposes a spatial-temporal graph synchronous aggregation (STGSA) model that extracts localized and long-term spatial-temporal dependencies simultaneously. Specifically, a tailored graph aggregation method in the vertex domain is designed to extract spatial and temporal features in a single graph convolution process. In each STGSA block, we devise a directed temporal correlation graph to represent the localized and long-term dependencies between nodes, and the potential temporal dependence is further fine-tuned by an adaptive weighting operation. Meanwhile, we construct a refined spatial adjacency matrix to represent the road sensor graph by considering both physical distance and node similarity in a data-driven manner. Then, inspired by the multi-head attention mechanism, which can jointly emphasize information from different representation subspaces, we construct a multi-stream module based on the STGSA blocks to capture global information; it repeatedly projects the embedded input through multiple different channels. Finally, the predicted values are generated by stacking several multi-stream modules.
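The adjacency construction described above combines a physical-distance graph with a data-driven similarity graph. The abstract does not give formulas, so the following is only a minimal sketch under common assumptions from the road-graph literature: a Gaussian kernel over pairwise distances for the physical term, a row-softmax over node-embedding inner products for the similarity term, and a convex combination of the two. All function and parameter names (`build_adjacency`, `sigma`, `alpha`, the embedding matrices) are hypothetical, and the fixed embedding arrays stand in for parameters that would be learned during training.

```python
import numpy as np

def build_adjacency(dist, emb_src, emb_dst, sigma=10.0, alpha=0.5):
    """Blend a distance-based graph with a data-driven similarity graph.

    dist     : (N, N) pairwise road distances between sensors
    emb_src,
    emb_dst  : (N, d) source/target node embeddings (stand-ins for
               embeddings learned end to end with the model)
    """
    # Physical term: Gaussian kernel over distances, a common choice
    # for weighting road-sensor graphs.
    a_dist = np.exp(-(dist ** 2) / (sigma ** 2))

    # Similarity term: ReLU'd embedding inner products, row-normalized
    # with a softmax so each row is a non-negative neighbor weighting.
    logits = np.maximum(emb_src @ emb_dst.T, 0.0)
    expl = np.exp(logits - logits.max(axis=1, keepdims=True))
    a_sim = expl / expl.sum(axis=1, keepdims=True)

    # Convex combination of the two views of node interaction.
    return alpha * a_dist + (1.0 - alpha) * a_sim
```

In a trained model, `emb_src` and `emb_dst` would be optimized jointly with the prediction loss, letting the similarity term capture interactions that pure distance misses.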
Extensive experiments are conducted on six real-world datasets, and numerical results show that the proposed STGSA model significantly outperforms the benchmarks.
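The multi-stream module mentioned in the abstract projects the embedded input through multiple independent channels, in the spirit of multi-head attention. The abstract does not specify the per-stream computation, so this is only an illustrative sketch: each stream applies its own projection (standing in for an STGSA block), and the streams are concatenated and merged. The names `multi_stream`, `projections`, and `w_out` are hypothetical.

```python
import numpy as np

def multi_stream(x, projections, w_out):
    """Project the input into several representation subspaces, one per
    stream, process each independently, then merge the results.

    x           : (N, d_in) embedded input for N sensor nodes
    projections : list of (d_in, d_k) matrices, one per stream
    w_out       : (num_streams * d_k, d_out) merge projection
    """
    # Each stream sees the same input through a different channel;
    # tanh stands in for the per-stream STGSA-block computation.
    streams = [np.tanh(x @ W) for W in projections]
    merged = np.concatenate(streams, axis=-1)  # joint representation
    return merged @ w_out
```

Stacking several such modules, as the abstract describes, would feed the merged output of one module as the input of the next.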
