Abstract

Due to the inherent uncertainties of bus operations, bus bunching remains a challenging problem that degrades service reliability and causes passenger dissatisfaction. This paper introduces a novel deep reinforcement learning framework that addresses bus bunching through dynamic holding control in a multi-agent system. We formulate the bus holding problem as a decentralized, partially observable Markov decision process and develop an event-driven simulator to emulate real-world bus operations. A deep Q-learning approach with parameter sharing is proposed to train the agents. Extensive experiments against multiple baseline strategies show that the proposed approach adapts to the uncertainties in bus operations and offers significant advantages across several performance metrics, including reduced passenger waiting time, more balanced bus loads, lower occupancy variability, and shorter travel times. These findings demonstrate the potential of the proposed method for practical application in real-world bus systems, offering a promising way to mitigate bus bunching and enhance overall service quality.
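
To make the "deep Q-learning with parameter sharing" idea concrete, the sketch below shows one way a single Q-network could be shared by all bus agents to map a local observation to a discrete holding duration. This is an illustrative sketch only, not the authors' implementation: the observation features, network sizes, action set (holding times in seconds), and all names such as `SharedQNetwork` and `select_holding_time` are assumptions for the example.

```python
# Minimal sketch (not the paper's code): one Q-network shared by every bus agent,
# mapping a local observation to a discrete holding time. All names, dimensions,
# and the action set are illustrative assumptions.
import random
import torch
import torch.nn as nn

HOLDING_ACTIONS = [0, 30, 60, 90, 120]  # candidate holding times in seconds (assumed)

class SharedQNetwork(nn.Module):
    """A single set of parameters used by all bus agents (parameter sharing)."""
    def __init__(self, obs_dim: int = 4, n_actions: int = len(HOLDING_ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def select_holding_time(q_net: SharedQNetwork, obs, epsilon: float = 0.1) -> int:
    """Epsilon-greedy holding decision for one bus from its local observation."""
    if random.random() < epsilon:
        return random.choice(HOLDING_ACTIONS)
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(obs, dtype=torch.float32))
    return HOLDING_ACTIONS[int(q_values.argmax().item())]

# Example: a toy observation [forward headway, backward headway, load, time of day],
# all normalized, for which the shared policy returns a holding time in seconds.
q_net = SharedQNetwork()
print(select_holding_time(q_net, [0.4, 0.6, 0.3, 0.5]))
```

In such a setup, parameter sharing means every bus evaluates the same network on its own partial observation, which keeps training tractable as the number of buses grows while still allowing decentralized, event-triggered holding decisions at each stop.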
