Abstract

Freeway bottlenecks such as on-ramp merging areas account for about 40% of recurring freeway congestion. It is generally agreed that building more roads and adding lanes to existing infrastructure does not solve the congestion problem, so dynamic traffic control measures offer a more cost-effective alternative. Ramp meters, traffic signal devices that regulate the flow of traffic entering freeways, are among the most effective measures for mitigating congestion at on-ramp merging areas. The confluence of deep reinforcement learning (RL) and connectivity provides a possible path to advance ramp meter signal control. Deep RL is a family of machine-learning methods that enable an agent to improve its performance by learning from interaction with its environment. In this study, three deep RL methods, namely proximal policy optimization (PPO), Ape-X deep Q-network (DQN), and asynchronous advantage actor-critic (A3C), are explored for ramp meter signal control to maximize vehicle speed and traffic throughput and to minimize energy consumption and emissions at freeway on-ramp merging areas in a connected environment. The low computational requirements and scalability of deep RL make it a powerful optimization tool for time-sensitive applications such as ramp meter signal control. The results of this study show that the deep RL methods outperform both a fixed-time controller and ALINEA, a state-of-the-art feedback controller.
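To make the problem formulation concrete, the sketch below casts ramp meter signal control as an RL environment of the kind the abstract describes. The paper's exact state, action, and reward design is not reproduced here: the observation vector, the binary red/green action, the reward weights, and the class name RampMeterEnv are all illustrative assumptions, a Gymnasium-style interface is assumed, and the placeholder dynamics stand in for a real traffic simulator.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class RampMeterEnv(gym.Env):
    """Hypothetical ramp-metering RL environment (illustrative sketch).

    The agent observes mainline and on-ramp conditions (obtainable from
    connected vehicles) and selects the meter signal phase each control
    step. State, action, and reward choices here are assumptions, not
    the paper's published design.
    """

    def __init__(self, control_step_s: float = 5.0):
        super().__init__()
        self.control_step_s = control_step_s
        # Observation: [mainline speed (m/s), mainline occupancy (0-1),
        #               ramp queue (veh), merge-area speed (m/s)]
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([40.0, 1.0, 100.0, 40.0], dtype=np.float32),
        )
        # Action: 0 = red (hold ramp traffic), 1 = green (release)
        self.action_space = spaces.Discrete(2)
        self._queue = 0.0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random
        self._queue = 0.0
        return self._observe(), {}

    def step(self, action):
        # Placeholder dynamics: a real implementation would advance a
        # microscopic traffic simulator (e.g., SUMO) by control_step_s.
        arrivals = float(self.np_random.poisson(1.5))
        released = min(self._queue + arrivals, 2.0) if action == 1 else 0.0
        self._queue = max(self._queue + arrivals - released, 0.0)

        obs = self._observe()
        _, _, queue, merge_speed = obs
        # Multi-objective reward (weights are illustrative): reward high
        # merge-area speed and throughput, penalize queuing as a proxy
        # for the energy and emissions cost of stop-and-go traffic.
        reward = (0.5 * merge_speed / 40.0
                  + 0.3 * released / 2.0
                  - 0.2 * queue / 100.0)
        return obs, float(reward), False, False, {}

    def _observe(self):
        # In a connected environment these would come from vehicle reports.
        speed = self.np_random.uniform(10.0, 35.0)
        occupancy = self.np_random.uniform(0.1, 0.6)
        merge_speed = self.np_random.uniform(5.0, 35.0)
        return np.array([speed, occupancy, self._queue, merge_speed],
                        dtype=np.float32)
```

An environment with this interface could then be trained with off-the-shelf PPO, Ape-X DQN, or A3C implementations (Ray RLlib, for example, has offered all three), with the trained agent queried for a new signal phase once per control step.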
