Abstract

This paper proposes a model-data-driven control method for a human-leading vehicle platoon, comprising a human-driven vehicle (HDV) as the leader and connected automated vehicles (CAVs) as followers. First, a representative HDV trajectory is constructed using principal component analysis (PCA) and the K-means clustering algorithm, and this trajectory is used as the training dataset. We then propose a novel platooning method, deep reinforcement learning with model-based guidance (DRLMG), in which the output of model predictive control (MPC) is integrated into both the input state and the reward function of the deep reinforcement learning (DRL) algorithm. Guided by MPC, the DRL agent converges to better decisions. To ensure safety and stability, a safety filter is designed using a control barrier function (CBF) and a control Lyapunov function (CLF). Simulation experiments with real-world driving data show that DRLMG outperforms MPC, reducing speed error, spacing error, and acceleration change rate by 17.9%, 53.7%, and 47.1%, respectively. Compared with pure DRL, DRLMG increases spacing error by 6.5% but reduces speed error by 15.4% and acceleration change rate by 14.3%. The proposed method enhances the generalization capability of DRL, dampens traffic oscillations caused by the leading HDV, and guarantees driving safety and stability.
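
As a rough illustration of the first step, the sketch below (not the authors' code) builds a representative trajectory from a set of resampled HDV speed profiles with PCA and K-means; the array shape, component count, and cluster count are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def representative_trajectory(trajectories, n_components=3, n_clusters=4):
    """Pick one HDV speed profile to represent the dataset.

    `trajectories` is assumed to be an (n_trips, n_samples) array of
    fixed-length, resampled speed profiles; the hyperparameters here
    are illustrative assumptions.
    """
    z = PCA(n_components=n_components).fit_transform(trajectories)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(z)
    densest = np.argmax(np.bincount(km.labels_))        # largest cluster
    members = np.where(km.labels_ == densest)[0]
    dists = np.linalg.norm(z[members] - km.cluster_centers_[densest], axis=1)
    return trajectories[members[np.argmin(dists)]]      # closest to centroid
```

The model-based-guidance idea itself can be sketched in a similarly minimal way, assuming a hypothetical `mpc_action` stand-in for the MPC solver and illustrative state and weight choices: the MPC suggestion is appended to the DRL observation, and a penalty for deviating from it is added to the reward.

```python
import numpy as np

def mpc_action(state):
    """Hypothetical stand-in for the MPC solver: maps the platoon state
    (spacing error, speed error, ego acceleration) to a reference
    acceleration with a simple clipped feedback law."""
    spacing_err, speed_err, _ = state
    return float(np.clip(0.5 * spacing_err + 0.8 * speed_err, -3.0, 3.0))

def drlmg_observation(state):
    """Append the MPC suggestion to the raw state, so the DRL policy
    can condition on the model-based guidance."""
    return np.append(state, mpc_action(state))

def drlmg_reward(state, a_rl, a_prev, w_track=1.0, w_jerk=0.1, w_guide=0.5):
    """Tracking and smoothness terms plus a penalty for deviating from
    the MPC guidance (all weights are illustrative assumptions)."""
    spacing_err, speed_err, _ = state
    tracking = -(spacing_err ** 2 + speed_err ** 2)
    jerk = -(a_rl - a_prev) ** 2
    guidance = -(a_rl - mpc_action(state)) ** 2
    return w_track * tracking + w_jerk * jerk + w_guide * guidance
```

In the full method, the CBF-CLF safety filter would then adjust the learned action before it is applied to the vehicle; that step is omitted from this sketch.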
