Abstract

This study develops an optimal, real-time, adaptive control algorithm that helps a Connected and Automated Vehicle (CAV) navigate a freeway lane-drop site (e.g., a work zone). The proposed traffic control strategy is based on the Deep Q-Network (DQN) Reinforcement Learning (RL) algorithm and is designed to determine the driving speed and lane-change maneuvers that enable the CAV to pass through the bottleneck with the least amount of delay. The DQN RL agent was trained using the microscopic traffic simulator VISSIM, with the learning focused on how the CAV can optimally maneuver through the lane-drop site while driving as close as possible to the freeway speed limit. VISSIM was also used to compare the performance of the DQN-controlled CAV against a human-driven vehicle with no intelligent control, in terms of the driving speed and travel time needed to traverse the lane-drop site under a congested, realistic traffic scenario. The research findings demonstrate the promise of DQN RL in allowing the CAV to navigate the lane-drop site intelligently and optimally. Specifically, for the scenario on which the agent was trained, the CAV's travel time was reduced by about 96 percent compared to the base case. The robustness of the RL agent was further tested on various scenarios that differed from the training case; for those cases, the mean and standard deviation of the reductions in the DQN-controlled CAV's travel time, relative to the base case, were about 31 percent and 61 percent, respectively.
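The abstract does not give the agent's exact state features, action set, reward weights, or the VISSIM coupling, so the following is only a minimal sketch of the DQN components it describes: an epsilon-greedy policy over discrete speed/lane-change actions, an experience replay buffer, and a target network. The state dimension, action labels, network size, hyperparameters, and the reward() shaping below are illustrative assumptions, not the paper's specification; in the study, transitions would come from the VISSIM simulation rather than from this self-contained snippet.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 6    # assumed features: CAV speed, lane index, distance to lane drop, gaps to neighbors
N_ACTIONS = 5    # assumed discrete actions: accelerate, decelerate, keep speed, change left, change right

class QNetwork(nn.Module):
    """Small fully connected network mapping a state to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

policy_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(policy_net.state_dict())
optimizer = optim.Adam(policy_net.parameters(), lr=1e-3)
replay = deque(maxlen=50_000)   # experience replay buffer of (s, a, r, s', done) tuples
GAMMA, BATCH = 0.99, 64

def reward(speed, speed_limit=29.0):
    """Assumed reward shaping: penalize deviation from the freeway speed limit (m/s)."""
    return -abs(speed_limit - speed) / speed_limit

def select_action(state, epsilon):
    """Epsilon-greedy action selection over the Q-network outputs."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        q = policy_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax().item())

def train_step():
    """One DQN update: sample a minibatch and regress toward the bootstrapped target."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s = torch.tensor([t[0] for t in batch], dtype=torch.float32)
    a = torch.tensor([t[1] for t in batch], dtype=torch.int64)
    r = torch.tensor([t[2] for t in batch], dtype=torch.float32)
    s2 = torch.tensor([t[3] for t in batch], dtype=torch.float32)
    done = torch.tensor([t[4] for t in batch], dtype=torch.float32)
    q = policy_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * (1.0 - done) * target_net(s2).max(1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A training loop would repeatedly call select_action(), apply the chosen maneuver in the simulator, push the resulting (state, action, reward, next_state, done) tuple into replay, call train_step(), and periodically copy policy_net's weights into target_net, which is the standard DQN recipe the abstract names.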
