Abstract
With the refinement of urban transportation networks, more and more passengers choose combined travel modes. To provide better inter-trip services, it is necessary to integrate and forecast passenger flow across the multi-level rail transit network and thereby improve connectivity between transport modes. Multi-level rail transit passenger flow prediction is difficult because of the complex spatiotemporal characteristics of the data and the differing composition of passenger flow across modes. Most existing research focuses on a single mode of transportation or on passenger flow within a city, while comprehensive analyses of passenger flow across multiple modes remain scarce. This study takes the key nodes of a multi-level rail transit railway hub as the research object, establishes a multi-task learning model, and forecasts short-term rail transit passenger flow by jointly considering the trunk railway, intercity rail transit, and the subway. Unlike existing work, the model introduces a convolution layer and a multi-head attention mechanism to improve and optimize the Transformer multi-task learning framework, trains on trunk railway, intercity railway, and subway data as separate tasks, and accounts for the correlations among their passenger flows in the prediction. In addition, a new residual network structure is introduced to mitigate over-fitting, vanishing gradients, and exploding gradients during training. The proposed multi-task learning model is evaluated on a large comprehensive transportation hub in the Guangzhou metropolitan area. The improved Transformer achieves the highest prediction accuracy of 88.569% (average across the three transport modes), compared with 81.579%, 82.230%, and 81.761% for the baseline methods HA, FC-LSTM, and STGCN, respectively.
The results show that the proposed multi-task learning model has better prediction performance than the existing models.
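The shared-representation idea behind the abstract, in which one attention-based encoder with a residual connection feeds separate prediction heads for the trunk railway, intercity railway, and subway, can be sketched as follows. This is a minimal illustration, not the authors' architecture: the weight initialization, dimensions, head names, and the absence of training are all assumptions made here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    """Scaled dot-product self-attention with several heads.

    x: (seq_len, d_model) window of passenger-flow features.
    Weights are random placeholders; a real model would learn them.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    out = np.zeros_like(x)
    for h in range(num_heads):
        Wq = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wk = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wv = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(d_head))  # rows sum to 1
        out[:, h * d_head:(h + 1) * d_head] = attn @ v
    return out

def multi_task_forecast(window, num_heads=4, seed=0):
    """One shared encoder, three task-specific linear heads.

    Returns a hypothetical next-step forecast per transit mode.
    """
    rng = np.random.default_rng(seed)
    # Residual connection around the shared attention block.
    shared = window + multi_head_attention(window, num_heads, rng)
    forecasts = {}
    for task in ("trunk_railway", "intercity_railway", "subway"):
        W = rng.standard_normal((window.shape[1], 1))  # per-task head
        forecasts[task] = float(shared[-1] @ W)  # predict from last step
    return forecasts
```

Here the three modes share the attention encoder (so correlations among their flows influence every head), while each mode keeps its own output layer, which is the basic multi-task arrangement the abstract describes.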