Abstract
Adaptive traffic signal control systems are deployed to accommodate real-time traffic conditions. However, travel demand and the behavior of individual vehicles may be overlooked by their model-based control algorithms and aggregated input data. Recent advances in artificial intelligence, especially the success of deep learning, make it possible to use information from individual vehicles to control traffic signals. Several pioneering studies have developed model-free control algorithms using deep reinforcement learning, but those studies are limited to isolated intersections, and their effectiveness was evaluated only under idealized simulated traffic conditions against hypothetical benchmarks. To fill this gap, this study proposes a network-level decentralized adaptive signal control algorithm based on a well-known deep reinforcement learning method, the double dueling deep Q network, within a multi-agent reinforcement learning framework. The proposed algorithm was evaluated against real-world coordinated actuated signals in a simulated suburban traffic corridor that emulates field traffic conditions. The evaluation results show that the proposed deep-reinforcement-learning-based algorithm outperforms the benchmark, reducing travel time by 10.27% and total delay by 46.46%.
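The two components named in the abstract, the dueling architecture and double Q-learning, are standard techniques whose core computations can be sketched briefly. The snippet below is an illustrative sketch only, not the authors' implementation: it shows the dueling aggregation Q(s, a) = V(s) + A(s, a) − mean(A) and the double-DQN target, in which the online network selects the next action while the target network evaluates it. All function and variable names are hypothetical.

```python
import numpy as np

def dueling_q(value, advantages):
    # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    # Subtracting the mean advantage keeps V and A identifiable.
    return value + advantages - advantages.mean()

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    # Double DQN: the online network picks the greedy next action,
    # the target network supplies its value, reducing overestimation.
    best_action = int(np.argmax(q_online_next))
    return reward + (0.0 if done else gamma * q_target_next[best_action])

# Hypothetical example: one state value, three action advantages.
q_values = dueling_q(1.0, np.array([1.0, 2.0, 3.0]))   # -> [0. 1. 2.]

# Hypothetical transition: reward 1.0, discount 0.9, two actions.
target = double_dqn_target(
    reward=1.0, gamma=0.9,
    q_online_next=np.array([0.5, 2.0]),   # online net favors action 1
    q_target_next=np.array([1.0, 3.0]),   # target net evaluates action 1
    done=False,
)                                          # -> 1.0 + 0.9 * 3.0 = 3.7
```

In a multi-agent, decentralized setting such as the one described, each intersection would typically run its own copy of this update on locally observed state.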