Abstract

Navigation or route guidance systems are designed to provide drivers with real-time travel information and recommended routes for their trips. Classical route choice models typically rely on utility theory to represent drivers' route choice behavior. Such choices, however, may be suboptimal from both the individual and the system perspectives, simply because drivers usually have imperfect knowledge of time-varying traffic conditions. In this article, we propose a new model-free deep reinforcement learning (DRL) approach to the adaptive route guidance problem based on microsimulation. The proposed approach consists of three interconnected algorithms: a network edge labeling algorithm, a routing plan identification algorithm, and an adaptive route guidance algorithm. Simulation experiments on both a toy network and a real-world network of Suzhou, China, demonstrate the effectiveness of the proposed approach in guiding a single vehicle as well as multiple vehicles through complex traffic environments. Comparative results confirm that the DRL approach outperforms the traditional shortest path method by further reducing the average travel time in the network.
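To make the model-free idea concrete, the sketch below shows how an RL agent can learn a route on a toy network by trial and error, without being given the travel-time model in closed form. This is only an illustration of the general principle: it uses tabular Q-learning on a hand-made four-node graph, whereas the article's actual approach uses deep RL with microsimulation and its own three algorithms; all node names, edge costs, and hyperparameters here are hypothetical.

```python
import random

# Hypothetical toy network: directed edges with travel times in minutes.
# Shortest route from A to D is A -> C -> B -> D (cost 2 + 1 + 1 = 4),
# beating the "obvious" A -> B -> D (cost 5 + 1 = 6).
EDGES = {
    "A": {"B": 5.0, "C": 2.0},
    "B": {"D": 1.0},
    "C": {"B": 1.0, "D": 6.0},
    "D": {},
}
DEST = "D"

def train(episodes=2000, alpha=0.1, gamma=1.0, eps=0.2, seed=0):
    """Tabular Q-learning where Q[node][next] estimates the remaining
    travel time to DEST when moving from node to next (lower is better)."""
    rng = random.Random(seed)
    q = {n: {m: 0.0 for m in nbrs} for n, nbrs in EDGES.items()}
    for _ in range(episodes):
        node = "A"
        while node != DEST:
            nbrs = EDGES[node]
            # Epsilon-greedy: mostly pick the cheapest-looking edge,
            # occasionally explore a random one.
            if rng.random() < eps:
                nxt = rng.choice(list(nbrs))
            else:
                nxt = min(nbrs, key=lambda m: q[node][m])
            cost = nbrs[nxt]  # the "reward" signal is the edge travel time
            future = 0.0 if nxt == DEST else min(q[nxt].values())
            q[node][nxt] += alpha * (cost + gamma * future - q[node][nxt])
            node = nxt
    return q

def greedy_route(q, origin="A"):
    """Follow the learned Q-values greedily from origin to DEST."""
    route, node = [origin], origin
    while node != DEST:
        node = min(EDGES[node], key=lambda m: q[node][m])
        route.append(node)
    return route
```

Because the agent only observes realized edge costs, the same training loop would continue to adapt if those costs drifted over time; this adaptivity under changing traffic conditions is what the article's DRL approach exploits at a much larger scale.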
