Abstract

This study proposes a learning-based approach to the challenge of joint adaptive routing in stochastic traffic networks with Connected Vehicles (CVs). We introduce a Markov Routing Game (MRG) to model the adaptive routing behavior of all vehicles in such networks, capturing both competitive route choices and real-time decision-making. We establish the existence of a Nash policy (i.e., an optimal joint adaptive routing policy) in the MRG that enables vehicles to adapt optimally to real-time traffic conditions online through efficient communication. To enhance scalability, we develop a homogeneity-based mean-field approximation method and, building on it, the Homogeneity-based Mean-Field Deep Reinforcement Learning (HMF-DRL) algorithm to learn the Nash policy of the MRG. Through numerical experiments on the Nguyen–Dupuis network, we demonstrate that our algorithm converges efficiently and learns a joint adaptive routing policy that significantly enhances traffic network efficiency. Furthermore, our study provides insights into the effects of travel demand, CV penetration rate, and the level of uncertainty on the performance of the joint adaptive routing policy. This paper represents a significant step toward improving network efficiency and reducing travel time for the majority of vehicles amid uncertain traffic conditions.
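The abstract only names the two ideas HMF-DRL combines, so the sketch below is a rough, purely illustrative rendering of them in PyTorch: all vehicles share one Q-network (the homogeneity assumption), and each vehicle's Q-values condition on a fixed-size mean-field term (the empirical route-choice distribution of co-located vehicles) rather than the full joint action. Everything here is assumed for illustration; the network sizes, state features, reward signal, and Q-learning-style update are placeholders, not the paper's actual specification.

```python
# Illustrative sketch only: a homogeneity-shared, mean-field-conditioned
# Q-update for CV route choice. All names and dimensions are hypothetical.
import torch
import torch.nn as nn

N_ROUTES = 3    # outgoing links at a decision node (assumed)
STATE_DIM = 4   # local traffic observation per vehicle (assumed)

class MeanFieldQNet(nn.Module):
    """Q(s, mu) -> value per candidate route; mu is the mean-field term,
    i.e. the empirical route-choice distribution of co-located vehicles."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_ROUTES, 64), nn.ReLU(),
            nn.Linear(64, N_ROUTES),
        )

    def forward(self, state, mu):
        return self.net(torch.cat([state, mu], dim=-1))

q_net = MeanFieldQNet()  # one network shared by all vehicles (homogeneity)
optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.95

def mean_field(actions):
    """Empirical distribution of route choices among co-located vehicles."""
    return torch.bincount(actions, minlength=N_ROUTES).float() / len(actions)

# One illustrative TD update on synthetic transitions for a platoon of CVs.
n_vehicles = 32
states = torch.randn(n_vehicles, STATE_DIM)
actions = torch.randint(0, N_ROUTES, (n_vehicles,))
mu = mean_field(actions).expand(n_vehicles, -1)
rewards = -torch.rand(n_vehicles)   # e.g. negative travel time (assumed)
next_states = torch.randn(n_vehicles, STATE_DIM)
next_mu = mu                        # mean-field held fixed for the target

with torch.no_grad():
    target = rewards + gamma * q_net(next_states, next_mu).max(dim=-1).values
q_pred = q_net(states, mu).gather(1, actions.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(q_pred, target)

optim.zero_grad()
loss.backward()
optim.step()
print(f"TD loss: {loss.item():.4f}")
```

The point this sketch makes concrete is the scalability claim in the abstract: because every vehicle shares one network and vehicles interact only through the fixed-size mean-field vector mu, the input dimension stays constant as the number of CVs grows, instead of scaling with the joint action space.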
