Abstract

Bus bunching caused by uncertain interstation travel times and passenger demand rates is a critical issue that impairs transit efficiency. Most current bus control studies focus on single or combined strategies while ignoring the bus system's real-time environmental information. This paper proposes a distributed deep reinforcement learning (DRL)-based generic bus dynamic control method that solves the bus bunching problem by maintaining schedule adherence and headway regularity and achieving consensus in the multiagent system. The study builds a bus system that utilizes historical bus and traffic information by incorporating these characteristics into the environment. A distributed DRL-based bus dynamic control strategy is then developed on top of this system, enabling each bus to adjust its motion by any generic method using weighted information from downstream buses. For training, a distributed proximal policy optimization (PPO) algorithm is adopted to improve convergence. Simulated experiments verify the control performance, robustness, feasibility, resilience, and generalization capability, showing that the strategy significantly reduces schedule and headway deviations, prevents deviations from accumulating downstream, and avoids bus bunching.
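To make the setup concrete, the sketch below illustrates the two ingredients the abstract names: a per-bus observation that folds in downstream buses' information with distance-based weights, and a reward that penalizes both schedule and headway deviations. This is a minimal illustration only; the weighting scheme (`decay`), the reward weights (`w_s`, `w_h`), and all function names are hypothetical and are not taken from the paper.

```python
import numpy as np


def build_observation(schedule_dev, downstream_headway_devs, decay=0.5):
    """One bus agent's observation: its own schedule deviation plus
    downstream buses' headway deviations, weighted so that nearer buses
    count more (hypothetical exponential-decay weighting)."""
    devs = np.asarray(downstream_headway_devs, dtype=float)
    weights = decay ** np.arange(len(devs))  # 1, decay, decay^2, ...
    return np.concatenate(([float(schedule_dev)], weights * devs))


def step_reward(schedule_dev, headway_dev, w_s=1.0, w_h=1.0):
    """Illustrative reward: penalize schedule and headway deviations so
    that zero deviation (perfect adherence and regular headways) is optimal."""
    return -(w_s * abs(schedule_dev) + w_h * abs(headway_dev))
```

In a PPO-style training loop, each bus agent would receive such an observation, choose a motion adjustment (e.g., a holding time or cruising-speed change), and be rewarded toward zero deviations, which discourages the downstream accumulation that produces bunching.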
