Abstract
Navigation or route guidance systems are designed to provide drivers with real-time travel information and recommended routes for their trips. Classical route choice models typically rely on utility theory to represent drivers' route choice behavior. Such choices, however, may not be optimal from either the individual or the system perspective, simply because drivers usually have imperfect knowledge of time-varying traffic conditions. In this article, we propose a model-free deep reinforcement learning (DRL) approach to the adaptive route guidance problem based on microsimulation. The proposed approach consists of three interconnected algorithms: a network edge labeling algorithm, a routing plan identification algorithm, and an adaptive route guidance algorithm. Simulation experiments on both a toy network and a real-world network of Suzhou, China, demonstrate the effectiveness of the proposed approach in guiding a single vehicle as well as multiple vehicles through complex traffic environments. Comparative results confirm that the DRL approach outperforms the traditional shortest path method by further reducing the average travel time in the network.
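As a point of reference, the traditional shortest path baseline mentioned above is typically computed with Dijkstra's algorithm over fixed edge travel times. The sketch below is illustrative only; the toy network and travel times are hypothetical and not taken from the paper's experiments.

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm over static edge travel times.

    `graph` maps node -> list of (neighbor, travel_time) pairs.
    Returns (total_time, path), or (float('inf'), []) if unreachable.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            # Reconstruct the route by walking predecessors back to the source.
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float('inf'), []

# Hypothetical toy network: edge weights are travel times in minutes.
toy_net = {
    'A': [('B', 4), ('C', 2)],
    'B': [('D', 5)],
    'C': [('B', 1), ('D', 8)],
    'D': [],
}
time, route = shortest_path(toy_net, 'A', 'D')
# -> total time 8.0 along route A -> C -> B -> D
```

Because these weights are static, the route cannot adapt when congestion changes en route; the DRL approach targets exactly this limitation by re-evaluating guidance as traffic conditions evolve.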