Abstract

As a primary countermeasure to mitigate traffic congestion and air pollution, promoting public transit has become a global consensus. Designing a robust and reliable bus timetable is a pivotal step toward increasing ridership and reducing operating costs for transit authorities. However, most previous studies on bus timetabling rely on historical passenger counts and travel time data to generate static schedules, which often yield biased results under uncertain conditions such as demand surges or adverse weather. In addition, acquiring real-time passenger origin/destination information from a limited number of running buses is not feasible. To address these issues, this article formulates the multiline dynamic bus timetable optimization problem as a Markov decision process and proposes a multiagent deep reinforcement learning framework to ensure effective learning in this imperfect-information game, where passenger demand and traffic conditions are not always known in advance. Moreover, a distributed reinforcement learning algorithm is applied to overcome the limitations of high computational cost and low efficiency. A case study of multiple bus lines in Beijing, China, confirms the effectiveness and efficiency of the proposed model. The results demonstrate that our method outperforms heuristic and state-of-the-art reinforcement learning algorithms, reducing operating and passenger costs by 20.30% compared with the actual timetables.
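As a rough illustration of what such a formulation typically looks like (the abstract does not specify the exact state, action, or reward definitions, so the symbols below are assumptions for exposition only), the dynamic timetabling problem for a set of lines $\mathcal{L}$ can be cast as a Markov decision process $(\mathcal{S}, \mathcal{A}, P, R, \gamma)$ in which, at each decision epoch $t$, the agent for line $l \in \mathcal{L}$ observes a partial state $s_t^l$ (e.g., current headways, vehicle loads, estimated waiting passengers) and selects an action $a_t^l$ (e.g., dispatch a bus now or hold). One common choice of reward, assumed here purely as a sketch, penalizes a weighted sum of the two cost terms mentioned in the abstract:

$$ R(s_t, a_t) = -\bigl( \alpha \, C_{\mathrm{op}}(a_t) + \beta \, C_{\mathrm{pass}}(s_t, a_t) \bigr), $$

where $C_{\mathrm{op}}$ accounts for the operating cost of dispatched trips, $C_{\mathrm{pass}}$ aggregates passenger waiting (and possibly in-vehicle) time, and $\alpha, \beta$ are weighting coefficients; the paper's actual reward design may differ.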
