Abstract

Reinforcement learning (RL) with deep neural networks is a promising data-driven approach to adaptive traffic signal control (ATSC). Most existing studies focus on designing efficient agents and policy optimization for ATSC, but neglect more detailed observations of the environment state. In this paper, an adaptive traffic signal control strategy, named A2C RTQL, is proposed for scheduling the traffic signal at an intersection by combining real-time lane-based queue lengths with a deep RL agent. First, Lighthill-Whitham-Richards (LWR) shockwave theory is employed to estimate the real-time queue length in each lane. Then, taking the estimated queue lengths as inputs, the A2C RTQL strategy is designed around an advantage actor-critic (A2C) agent, where the lanes are grouped into multiple parallel environments according to the traffic signal phases. Simulation results in SUMO under simulated peak-hour traffic dynamics demonstrate the optimality and efficiency of the proposed strategy compared with other methods.
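As a rough illustration of the queue-estimation step, the sketch below computes a lane's queue length from the LWR queuing shockwave that forms when the signal turns red. It is a minimal sketch, not the authors' implementation; the function names and the traffic-state values (arrival flow and density, jam density) are illustrative assumptions.

```python
# Minimal sketch of LWR shockwave-based queue estimation for one lane.
# All numeric values below are assumed for illustration only.

def shockwave_speed(q_up, k_up, q_down, k_down):
    """Speed (m/s) of the interface between two traffic states per LWR
    theory: w = (q_up - q_down) / (k_up - k_down)."""
    return (q_up - q_down) / (k_up - k_down)

def queue_length(t_red, q_arrival, k_arrival, k_jam):
    """Queue length (m) t_red seconds after the signal turns red.
    The stopped state downstream has zero flow and jam density k_jam;
    the back of the queue moves upstream at |w|."""
    w = shockwave_speed(q_arrival, k_arrival, 0.0, k_jam)
    return abs(w) * t_red

if __name__ == "__main__":
    # Assumed: arrival flow 0.3 veh/s at density 0.02 veh/m,
    # jam density 0.15 veh/m, measured 30 s into the red phase.
    print(f"estimated queue ≈ {queue_length(30.0, 0.3, 0.02, 0.15):.1f} m")
```

In an RL setting along the lines described above, such per-lane estimates would be stacked into a state vector and fed to the actor-critic agent at each decision step.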
