Abstract

In advanced freeway traffic management systems, variable speed limit control (VSLC) is frequently discussed as one of the control measures. However, in a mixed traffic environment where connected and automated vehicles (CAVs) and human-driven vehicles coexist, existing VSLC strategies for multi-lane freeways have two major shortcomings: the lack of precise control at the individual vehicle level, and the imposition of a uniform speed limit across all lanes. This paper proposes a novel differential variable speed limit control (DVSLC) strategy based on multi-agent reinforcement learning (MARL) for a mixed traffic environment (abbreviated as MARL-DVSLC). The proposed MARL-DVSLC approach utilizes a centralized-training-with-decentralized-execution paradigm to learn the joint actions of the variable speed limit controllers across all lanes, thereby setting a different speed limit for each lane. The reward function is based on the total time spent (TTS) on the freeway so as to improve traffic mobility. Note that MARL-DVSLC disseminates speed limit information to CAVs via infrastructure-to-vehicle (I2V) communication. The effectiveness of MARL-DVSLC is verified under different simulation scenarios, and its performance is compared with a feedback-based VSLC method, a DVSLC method based on the deep deterministic policy gradient (DDPG) algorithm (abbreviated as DDPG-DVSLC), and the no-control case. The results indicate that the proposed strategy can effectively improve traffic efficiency and reduce the spatiotemporal extent of traffic congestion at a 30% CAV penetration rate. Compared with the second-best DDPG-DVSLC method, the proposed strategy reduces TTS by 12.88% under stable traffic demand and by 10.24% under fluctuating traffic demand.
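As a minimal sketch of the objective described above (not the paper's implementation), the TTS over a simulation horizon can be computed from the number of vehicles on the freeway at each control step, and negated to serve as a reward signal that the agents maximize; the function names and the per-step vehicle-count representation are illustrative assumptions.

```python
def total_time_spent(vehicle_counts, dt=1.0):
    """TTS: sum over control steps of (vehicles on the freeway) * step length.

    vehicle_counts -- per-step vehicle counts (hypothetical input format)
    dt             -- control step length in seconds
    """
    return sum(n * dt for n in vehicle_counts)


def tts_reward(vehicle_counts, dt=1.0):
    # Minimizing TTS improves mobility, so the shared reward is its negative.
    return -total_time_spent(vehicle_counts, dt)


# Example: 3 steps of 10 s each with 40, 42, and 38 vehicles present.
print(tts_reward([40, 42, 38], dt=10.0))  # -1200.0
```

In a centralized-training-with-decentralized-execution setup, a shared scalar reward of this kind would be used during training, while each lane's controller acts only on its own observation at execution time.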
