Abstract

Adaptive traffic signal control systems are deployed to accommodate real-time traffic conditions. However, their model-based control algorithms and aggregated input data may overlook travel demand and the behavior of individual vehicles. Recent advances in artificial intelligence, especially the success of deep learning, make it possible to use information from individual vehicles to control traffic signals. Several pioneering studies have developed model-free control algorithms using deep reinforcement learning. However, those studies are limited to isolated intersections, and their effectiveness was evaluated only under idealized simulated traffic conditions against hypothetical benchmarks. To fill this gap, this study proposes a network-level decentralized adaptive signal control algorithm using a well-known deep reinforcement learning method, the double dueling deep Q-network, within a multi-agent reinforcement learning framework. The proposed algorithm was evaluated against real-world coordinated actuated signals in a simulated suburban traffic corridor that emulates field traffic conditions. The evaluation results show that the proposed deep-reinforcement-learning-based algorithm outperforms the benchmark, reducing travel time by 10.27% and total delay by 46.46%.
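The double dueling deep Q-network named in the abstract combines two standard ideas: the dueling architecture, which splits the Q-value into a state value and per-action advantages, and double DQN, which uses the online network to select the next action and a separate target network to evaluate it. The sketch below illustrates only these two computations in plain Python; the network weights, phase set, and numeric values are hypothetical and not taken from the paper.

```python
def dueling_q_values(value, advantages):
    """Dueling DQN aggregation:
    Q(s, a) = V(s) + (A(s, a) - mean over a' of A(s, a')).
    Subtracting the mean advantage keeps V and A identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN target: the online network picks the greedy next
    action, the target network evaluates it, which reduces the
    overestimation bias of vanilla Q-learning."""
    a_star = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[a_star]

# Hypothetical example: V(s) = 2.0 and advantages for four signal phases.
q = dueling_q_values(2.0, [0.5, -0.5, 1.0, -1.0])  # -> [2.5, 1.5, 3.0, 1.0]
greedy_phase = max(range(len(q)), key=q.__getitem__)  # phase 2

# One-step target with made-up next-state Q-values.
y = double_dqn_target(1.0, 0.9, q_online_next=[0.1, 0.4],
                      q_target_next=[2.0, 1.0])  # 1.0 + 0.9 * 1.0 = 1.9
```

In the multi-agent setting described in the abstract, each intersection would run its own agent with such a value head, acting on locally observed vehicle information.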
