Abstract
Traffic signal control is of great importance to urban transportation systems and public travel, yet it remains challenging for two essential reasons. First, spatial–temporal correlations are crucial in an intersection scenario. However, existing works have either considered only one of these features or applied a simple fusion of spatial and temporal information without adequately exploiting their potential correlations. Second, some works using graph neural networks treat the graph nodes of adjacent intersections as static, ignoring the fact that intersection traffic changes dynamically. These dynamically changing characteristics of an intersection are likewise significant for traffic signal prediction. If these problems are left unresolved, traffic pressure increases and travelers' time is wasted. To address these challenges, we propose a novel meta-learning spatial–temporal graph attention network (MetaSTGAT) for adaptive traffic signal control. Specifically, we design a graph neural framework combining a graph attention network (GAT) and a long short-term memory (LSTM) network to obtain spatial and temporal information, and the spatial–temporal features are carefully merged to improve performance. Furthermore, to adapt the graph network to dynamic traffic flow, i.e., the dynamics of the nodes, we propose a meta-learning method that generates the graph neural network's weights. This dynamic weight generation captures the changing state of graph nodes, and thus of intersections, so that neighboring nodes receive updated weights from additional features before they influence one another. Comprehensive experiments in multi-intersection scenarios on synthetic and real-world datasets demonstrate the effectiveness of MetaSTGAT against other state-of-the-art methods.
Our method reduces travel time by 12.23%, 19.30%, 13.84%, 10.91%, 8.24%, and 8.74% over the graph-level method CoLight on four synthetic datasets and two real-world datasets, respectively.
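To make the core idea concrete, the sketch below illustrates (in plain, dependency-free Python) how a meta-learner could regenerate attention weights from the current node features at every step, so that aggregation over neighboring intersections tracks the dynamic traffic state. This is an illustrative simplification under assumed names (`meta_score`, `gat_step`); the actual MetaSTGAT uses trained GAT, LSTM, and weight-generation networks as described in the paper.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def meta_score(feat_i, feat_j, w):
    # Hypothetical meta-learner: a linear score over the concatenated
    # features of two nodes. Because the score is recomputed from the
    # *current* features, the attention weights change as traffic changes.
    concat = feat_i + feat_j
    return sum(a * b for a, b in zip(w, concat))

def gat_step(features, neighbors, w):
    """One graph-attention aggregation with dynamically generated scores.

    features  -- list of per-intersection feature vectors
    neighbors -- dict: node index -> list of neighbor indices
    w         -- parameters produced by the (hypothetical) meta-learner
    """
    out = []
    for i, feat_i in enumerate(features):
        nbrs = neighbors[i] + [i]  # include a self-loop, as in GAT
        scores = [meta_score(feat_i, features[j], w) for j in nbrs]
        alphas = softmax(scores)   # attention coefficients sum to 1
        dim = len(feat_i)
        agg = [sum(a * features[j][d] for a, j in zip(alphas, nbrs))
               for d in range(dim)]
        out.append(agg)
    return out

# Toy usage: three intersections in a line, 2-d features.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
w = [0.5, -0.2, 0.3, 0.1]  # assumed meta-generated parameters
updated = gat_step(features, neighbors, w)
```

In the full model, the aggregated spatial features would then feed an LSTM to capture temporal correlations, and `w` would itself be the output of a learned network rather than a fixed vector.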