Abstract
Reinforcement learning (RL), given its adaptability and generality, has great potential to optimize online traffic signal control strategies. Although studies have proposed various RL-based signal controllers and validated them offline, very few examine the robustness of the trained RL-based controllers when deployed in a dynamic traffic environment. This paper proposes a multi-agent reinforcement learning algorithm for traffic signal control and develops a general multi-agent optimization simulation tool to evaluate different signal control methods. A transfer learning technique is applied to test the robustness of the proposed algorithm and traditional control approaches under different traffic scenarios, including stochastic traffic flow, varying traffic volume, and uncertain sensor data. The experimental results show that the proposed RL-based control method is robust under stochastic traffic flow and varying traffic demand patterns, and it outperforms the fixed-time and vehicle-actuated methods. However, it is unstable in the case of highly noisy sensor data. Moreover, the trained RL-based controller can continuously learn online and improve its performance by interacting with the dynamic traffic environment, especially when traffic is congested and sensor observations are noisy.
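For readers unfamiliar with RL-based signal control, the sketch below illustrates the general idea in its simplest form: a single tabular Q-learning agent that observes discretised queue lengths and chooses which approach receives a green phase. This is not the paper's multi-agent algorithm; the state and action definitions, the reward, and the toy queue dynamics are illustrative assumptions only.

```python
# Hypothetical minimal sketch of an RL signal-control agent (not the paper's
# exact method): tabular Q-learning over binned queue lengths, with the action
# selecting which approach gets the green phase.
import random
from collections import defaultdict

N_APPROACHES = 2          # e.g. north-south vs. east-west (assumed)
MAX_QUEUE_BIN = 5         # queues binned into 0..5 to keep the state space small
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

q_table = defaultdict(float)   # (state, action) -> estimated value

def discretise(queues):
    """Map raw queue lengths to a small discrete state."""
    return tuple(min(q // 3, MAX_QUEUE_BIN) for q in queues)

def choose_action(state):
    """Epsilon-greedy choice of which approach receives green."""
    if random.random() < EPSILON:
        return random.randrange(N_APPROACHES)
    return max(range(N_APPROACHES), key=lambda a: q_table[(state, a)])

def step(queues, action):
    """Toy environment: the green approach discharges vehicles, all approaches
    receive stochastic arrivals; reward penalises total queue (a delay proxy)."""
    queues = list(queues)
    queues[action] = max(0, queues[action] - 4)          # saturation flow on green
    queues = [q + random.randint(0, 2) for q in queues]  # stochastic arrivals
    reward = -sum(queues)
    return queues, reward

queues = [0] * N_APPROACHES
state = discretise(queues)
for _ in range(10_000):                                  # online learning loop
    action = choose_action(state)
    queues, reward = step(queues, action)
    next_state = discretise(queues)
    best_next = max(q_table[(next_state, a)] for a in range(N_APPROACHES))
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])
    state = next_state
```

Because the update loop runs against whatever observations arrive, the same structure also conveys why such a controller can keep adapting online after deployment, which is the robustness property the abstract highlights.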