Abstract

Connected and autonomous vehicles (CAVs) can be leveraged for cooperative platooning control to alleviate traffic oscillations. However, before a pure CAV environment is realized, CAVs and human-driven vehicles (HDVs) will coexist on roads, creating a mixed-flow traffic environment. Mixed-flow traffic introduces key challenges for CAV operations due to potential lane changes by HDVs in adjacent lanes, which can cause stop-and-go waves and traffic oscillations. An understanding of the interactions between CAVs and HDVs in the lane-change process can be exploited to have CAVs proactively preclude disruptive lane changes by HDVs. This study proposes a deep reinforcement learning-based proactive longitudinal control strategy (PLCS) for CAVs to counteract disruptive HDV lane-change behaviors that can induce disturbances, and to preserve the smoothness of traffic flow in the CAV platooning control process. In the PLCS, a Transformer-based lane-change traffic condition predictor is constructed to predict whether an HDV is likely to perform a disruptive lane change under the ambient traffic conditions. If no disruptive lane change is predicted, an extended intelligent driver model is activated for the CAV to perform smooth car-following behavior under cooperative CAV platooning control. If a disruptive lane change is predicted, a Rainbow deep Q-network (RDQN)-based lane-change preclusion model is employed, through which the CAV can alter the lane-change traffic condition and preclude the HDV’s lane change. Results from numerical experiments suggest that a CAV controlled by the PLCS is effective in reducing disruptive lane-change maneuvers by an HDV in its vicinity and can improve string stability in mixed-flow traffic. Further, the effectiveness of the PLCS is illustrated under different lane-change scenarios, CAV control setups, and HDV driver types.
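
The abstract does not give the extended intelligent driver model formulation or the RDQN design, but the control flow it describes (predict a disruptive lane change, then either follow smoothly or act to preclude it) can be sketched. The following minimal Python illustration uses the standard IDM acceleration as a stand-in for the paper's extended IDM; all function names, parameter values, and the stub predictor/policy are hypothetical and are not the authors' implementation.

```python
import math

# Standard IDM parameters (illustrative values only; the paper's extended
# IDM parameters are not reported in the abstract).
V_DESIRED = 30.0   # desired speed (m/s)
T_HEADWAY = 1.5    # safe time headway (s)
A_MAX = 1.5        # maximum acceleration (m/s^2)
B_COMF = 2.0       # comfortable deceleration (m/s^2)
S_MIN = 2.0        # minimum standstill gap (m)
DELTA = 4          # acceleration exponent


def idm_acceleration(v, gap, dv):
    """Standard intelligent driver model acceleration.

    v   : speed of the subject CAV (m/s)
    gap : bumper-to-bumper gap to the preceding vehicle (m)
    dv  : approach rate, v - v_leader (m/s)
    """
    s_star = S_MIN + v * T_HEADWAY + v * dv / (2.0 * math.sqrt(A_MAX * B_COMF))
    return A_MAX * (1.0 - (v / V_DESIRED) ** DELTA - (s_star / max(gap, 0.1)) ** 2)


def plcs_step(state, lane_change_predictor, preclusion_policy):
    """One control step of the proactive longitudinal control strategy.

    lane_change_predictor : callable returning True if a disruptive HDV lane
                            change is predicted (a Transformer in the paper).
    preclusion_policy     : callable mapping the state to a longitudinal
                            acceleration (an RDQN agent in the paper).
    """
    if lane_change_predictor(state):
        # Disruptive lane change predicted: the preclusion policy adjusts the
        # CAV's longitudinal motion to alter the lane-change traffic condition.
        return preclusion_policy(state)
    # Otherwise: smooth car-following under cooperative platooning control.
    return idm_acceleration(state["v"], state["gap"], state["dv"])


if __name__ == "__main__":
    # Toy usage with stub predictor/policy; both are placeholders.
    state = {"v": 25.0, "gap": 30.0, "dv": 2.0}
    accel = plcs_step(state, lambda s: False, lambda s: -1.0)
    print(f"commanded acceleration: {accel:.3f} m/s^2")
```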
