Abstract
Connected vehicle-based adaptive traffic signal control requires a certain market penetration rate (MPR) to be effective, usually exceeding 10%. Cooperative perception based on connected and automated vehicles (CAVs) can effectively improve overall data collection efficiency and reduce the required MPR. However, the distribution of observed vehicles under cooperative perception is highly skewed and imbalanced, especially under very low CAV MPRs (e.g., 1%). To address this challenge, this paper proposes a novel deep reinforcement learning-based adaptive traffic signal control (RL-TSC) method, denoted CAVLight, that integrates a traffic flow model known as the cell transmission model (CTM). Traffic states estimated from the CTM are combined with data collected from the cooperative perception environment to update the states of the CAVLight model. The reward function is designed to reduce total vehicle delays and stabilize agent behavior. Extensive numerical experiments at a real-world intersection with varying traffic demand levels and CAV MPRs are conducted to compare the performance of CAVLight against benchmark algorithms, including a fixed-time controller, an actuated controller, the max pressure model, and an optimization-based adaptive TSC. Results demonstrate that CAVLight outperforms the benchmarks in both performance and generalizability, especially under the 1% CAV MPR scenario with high traffic demands. The influence of CTM integration on CAVLight is further explored through RL agent policy visualization and sensitivity analyses of CTM parameters and CAV perception capabilities (i.e., detection range and detection accuracy).
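For readers unfamiliar with the CTM, the sketch below gives the standard cell update in Daganzo's formulation; the paper's exact discretization, cell lengths, and treatment of the signalized boundary are not stated in the abstract, so the symbols here are illustrative assumptions.

$$
y_i(t) = \min\bigl\{\, n_{i-1}(t),\; Q_i(t),\; \tfrac{w}{v}\,[\,N_i(t) - n_i(t)\,] \,\bigr\},
\qquad
n_i(t+1) = n_i(t) + y_i(t) - y_{i+1}(t),
$$

where $n_i(t)$ is the number of vehicles in cell $i$ at time step $t$, $N_i(t)$ its holding capacity, $Q_i(t)$ its maximum inflow, $y_i(t)$ the flow entering cell $i$ during step $t$, $v$ the free-flow speed, and $w$ the backward wave speed.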