Abstract

Accurate traffic status prediction is of great importance for improving the security and reliability of the intelligent transportation system. However, urban traffic status prediction is a very challenging task due to the tight symmetry among the Human–Vehicle–Environment (HVE). The recently proposed spatial–temporal 3D convolutional neural network (ST-3DNet) effectively extracts both spatial and temporal characteristics in HVE, but ignores the essential long-term temporal characteristics and the symmetry of historical data. Therefore, a novel spatial–temporal 3D residual correlation network (ST-3DRCN) is proposed for urban traffic status prediction in this paper. The ST-3DRCN first introduces the Pearson correlation coefficient method to extract highly correlated traffic data. Then, a dynamic spatial feature extraction component is constructed using 3D convolution combined with residual units to capture dynamic spatial features. After that, based on the idea of long short-term memory (LSTM), a novel architectural unit is proposed to extract dynamic temporal features. Finally, the spatial and temporal features are fused to obtain the final prediction results. Experiments have been performed using two datasets from Chengdu, China (TaxiCD) and California, USA (PEMS-BAY). Taking the root mean square error (RMSE) as the evaluation index, the prediction accuracy of ST-3DRCN on the TaxiCD dataset is 21.4%, 21.3%, 11.7%, 10.8%, 4.7%, 3.6% and 2.3% higher than LSTM, convolutional neural network (CNN), 3D-CNN, spatial–temporal residual network (ST-ResNet), spatial–temporal graph convolutional network (ST-GCN), dynamic global-local spatial–temporal network (DGLSTNet), and ST-3DNet, respectively.
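The Pearson-based screening step described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the region names, toy series, and the 0.8 threshold are assumptions for demonstration only.

```python
import numpy as np

def pearson_corr(x, y):
    """Pearson correlation coefficient between two traffic series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

def select_correlated(target, candidates, threshold=0.8):
    """Keep only candidate series strongly (positively) correlated with the target."""
    return {name: series for name, series in candidates.items()
            if pearson_corr(target, series) >= threshold}

# Toy example: traffic counts for three hypothetical regions vs. a target region.
target = [10, 12, 15, 20, 18, 14]
candidates = {
    "region_A": [11, 13, 16, 21, 19, 15],   # closely tracks the target
    "region_B": [5, 5, 6, 5, 6, 5],         # nearly flat, weak relation
    "region_C": [20, 18, 15, 10, 12, 16],   # inversely related
}
kept = select_correlated(target, candidates, threshold=0.8)
```

Only the highly correlated series (here, `region_A`) would then be fed into the spatial–temporal network.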

Highlights

Accepted: 22 December 2021

  • Real-time and accurate traffic information prediction is an important component of the modern intelligent transportation system (ITS) and the advanced traveler information system (ATIS) [1,2]

  • To avoid the problem of model precision degradation due to the large number of convolution layers, we introduce residual units to improve the sensitivity, formulated as Z_R^l = Z_R^{l-1} + F_R(Z_R^{l-1}; θ_R^l), l = 1, …, L_R, where Z_R^{l-1} and Z_R^l are the input and output of the l-th residual unit, respectively, θ_R^l is the set of learnable parameters in the l-th residual unit, F_R is the residual mapping of the dynamic spatial feature extraction component, and L_R is the number of residual layers required for dynamic spatial feature extraction

  • To verify the superiority of the proposed ST-3DRCN model, its results are compared with long short-term memory (LSTM), convolutional neural network (CNN), 3D-CNN, ST-ResNet, ST-GCN, DGLSTNet and ST-3DNet
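The residual-unit recurrence from the highlights above can be sketched in NumPy. Note the affine-plus-ReLU mapping below is a hypothetical stand-in for the paper's 3D-convolutional residual mapping F_R; shapes and parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_mapping(z, theta):
    """Stand-in for F_R: a simple affine map with ReLU.
    In ST-3DRCN this would be a stack of 3D convolutions."""
    w, b = theta
    return np.maximum(z @ w + b, 0.0)

def residual_stack(z, thetas):
    """Apply L_R residual units: Z_R^l = Z_R^{l-1} + F_R(Z_R^{l-1}; theta_R^l)."""
    for theta in thetas:
        z = z + residual_mapping(z, theta)
    return z

# Toy input: a batch of 4 feature vectors of width 8, passed through 3 residual units.
z0 = rng.standard_normal((4, 8))
thetas = [(rng.standard_normal((8, 8)) * 0.1, np.zeros(8)) for _ in range(3)]
out = residual_stack(z0, thetas)
```

Because each unit adds its mapping to an identity path, gradients can flow through the skip connections, which is what mitigates the precision degradation of deep convolution stacks.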

Introduction

Real-time and accurate traffic information (e.g., traffic status, traffic volume and traffic flow) prediction is an important component of the modern intelligent transportation system (ITS) and the advanced traveler information system (ATIS) [1,2]. Knowing reliable traffic information in advance can help travelers plan better routes, guide transportation departments in formulating better traffic management strategies, alleviate traffic congestion and reduce carbon emissions [3,4]. Urban traffic is complex: it shows symmetry over long-term cycles but exhibits strong abrupt changes and fluctuations over short periods. Reliable traffic information prediction is therefore a very challenging task in the real world, affected by the following complex factors. Dynamic spatial correlation: the spatial characteristics of urban traffic are affected by global or local factors as well as by historical factors. For example, when an intersection is crowded at 12:00 on Sunday, this is on the one hand caused by a surge of surrounding vehicles, and on the other hand reflects that traffic congestion often occurs at this time
