Abstract

As a fundamental component of many Intelligent Transportation Systems (ITS) applications, short-term traffic volume prediction plays an important role in tasks such as traffic management, traffic signal control, and route planning. Although neural-network-based traffic prediction methods can produce good results, most of these models cannot be explained intuitively. In this paper, we propose a spatiotemporal attention mechanism-based multistep traffic volume prediction model (SAMM), which not only increases short-term prediction accuracy but also improves interpretability by analyzing the internal attention scores learned by the model. Inside the model, an LSTM-based Encoder-Decoder network with a hybrid attention mechanism is introduced, consisting of spatial attention and temporal attention. At the first level, local and global spatial attention mechanisms, which consider micro traffic evolution and macro pattern similarity respectively, capture and amplify the features from highly correlated entrance stations. At the second level, a temporal attention mechanism amplifies the features from the time steps identified as contributing most to the future exit volume. Considering the time-dependent characteristics and the continuity of the recent traffic volume trend, timestamp features and the historical exit volume series of the target stations are included as external inputs. An experiment is conducted using data from the highway toll collection system of Guangdong Province, China. By extracting and analyzing the weights of the spatial and temporal attention layers, the contributions of the intermediate parameters are revealed and explained with knowledge derived from historical statistics. The results show that the proposed model outperforms the state-of-the-art baseline by 29.51% in terms of MSE, 13.93% in terms of MAE, and 5.69% in terms of MAPE. The effectiveness of the Encoder-Decoder framework and the attention mechanism is also verified.
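For readers who want a concrete picture of the two-level attention described above, the following is a minimal PyTorch sketch. It is not the authors' implementation: the module names, layer dimensions, and the collapsing of the local and global spatial attention into a single scoring layer are illustrative assumptions, and the external inputs (timestamp features and historical exit volumes) are omitted for brevity.

```python
# Minimal sketch of a SAMM-style hybrid-attention Encoder-Decoder (PyTorch).
# All names and dimensions are assumptions; the paper's separate local and
# global spatial attention branches are collapsed into one scoring layer.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """First level: weights the entrance stations at each time step."""

    def __init__(self, n_stations: int, hidden: int):
        super().__init__()
        self.score = nn.Linear(n_stations + hidden, n_stations)

    def forward(self, x_t, h_prev):
        # x_t: (batch, n_stations) volumes at one step; h_prev: (batch, hidden)
        e = self.score(torch.cat([x_t, h_prev], dim=-1))
        alpha = torch.softmax(e, dim=-1)        # spatial attention weights
        return alpha * x_t, alpha               # amplify correlated stations


class TemporalAttention(nn.Module):
    """Second level: weights encoder steps for each decoder step."""

    def __init__(self, hidden: int):
        super().__init__()
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, enc_out, d_prev):
        # enc_out: (batch, T, hidden); d_prev: (batch, hidden)
        T = enc_out.size(1)
        d = d_prev.unsqueeze(1).expand(-1, T, -1)
        e = self.score(torch.cat([enc_out, d], dim=-1)).squeeze(-1)
        beta = torch.softmax(e, dim=-1)         # temporal attention weights
        context = torch.bmm(beta.unsqueeze(1), enc_out).squeeze(1)
        return context, beta


class SAMMSketch(nn.Module):
    def __init__(self, n_stations: int, hidden: int = 64, horizon: int = 3):
        super().__init__()
        self.horizon = horizon
        self.spatial = SpatialAttention(n_stations, hidden)
        self.encoder = nn.LSTMCell(n_stations, hidden)
        self.temporal = TemporalAttention(hidden)
        self.decoder = nn.LSTMCell(hidden, hidden)
        self.head = nn.Linear(hidden, 1)        # predicted exit volume

    def forward(self, x):
        # x: (batch, T, n_stations) historical entrance volumes
        B, T, _ = x.shape
        h = torch.zeros(B, self.encoder.hidden_size, device=x.device)
        c = torch.zeros_like(h)
        enc_out = []
        for t in range(T):
            x_att, _ = self.spatial(x[:, t], h)  # spatial attention
            h, c = self.encoder(x_att, (h, c))
            enc_out.append(h)
        enc_out = torch.stack(enc_out, dim=1)
        dh, dc = h, c
        preds = []
        for _ in range(self.horizon):            # multistep decoding
            ctx, _ = self.temporal(enc_out, dh)  # temporal attention
            dh, dc = self.decoder(ctx, (dh, dc))
            preds.append(self.head(dh))
        return torch.cat(preds, dim=-1)          # (batch, horizon)


if __name__ == "__main__":
    model = SAMMSketch(n_stations=20)
    y = model(torch.randn(8, 12, 20))            # 12 past steps, 20 stations
    print(y.shape)                               # torch.Size([8, 3])
```

Because the attention weights `alpha` and `beta` are explicit tensors, they can be extracted and inspected directly, which is the mechanism behind the interpretability analysis described in the abstract.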
