Abstract

The implementation of autonomous driving is inseparable from the development of intelligent driving decision-making models, which face high scene complexity, weak coupling between decision tasks, and the difficulty of guaranteeing decision safety. Starting from the priority and logic of lane-changing and car-following decisions, and considering driving efficiency, safety, and comfort, this paper constructs a double-layer decision-making model. Two deep reinforcement learning algorithms are used for the upper and lower layers to handle the large-scale mixed state space and produce a composite action output of lane-changing and car-following decisions. In the upper-layer model, the D3QN algorithm is used to separate the potential value of the environment state from the value of selecting each lane-changing action. Unlike traditional mechanisms that rely only on negative rewards, a lane-changing benefit function and a dangerous-action shielding mechanism are used to eliminate collisions. The DDPG algorithm is adopted in the lower-layer model to handle car-following decisions and output continuous vehicle speed control. In addition, the two algorithms are trained jointly to improve the coordination of the double-layer model. Mixed standard driving-cycle conditions are selected to build a highly complex training environment, and NGSIM data are used to reconstruct scenes for testing. Simulations in SUMO show that the double-layer model increases the driving speed of the original data by 23.99%, outperforming the compared models.
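
The paper does not include code, but as a rough illustration of two ideas named above, the following is a minimal PyTorch sketch of a dueling Q-network (the value/advantage split underlying D3QN) combined with a dangerous-action shielding mask. The state dimension, the three-action lane-change space, the network sizes, and the safe_mask input are all illustrative assumptions; how the mask would be computed (e.g. from gaps or time-to-collision against surrounding vehicles) is not specified here.

```python
# A minimal sketch (not the authors' code) of a dueling Q-network plus a
# dangerous-action shielding mask. All dimensions and the mask source are
# illustrative assumptions, not details taken from the paper.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int = 3):  # keep / left / right (assumed)
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)              # V(s): potential value of the state
        self.advantage = nn.Linear(128, n_actions)  # A(s,a): value of each lane-change action

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Standard dueling aggregation: Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a))
        return v + a - a.mean(dim=-1, keepdim=True)


def masked_greedy_action(q_net: DuelingQNet, state: torch.Tensor,
                         safe_mask: torch.Tensor) -> int:
    """Pick the best lane-change action among those the shield marks as safe.

    safe_mask is a boolean tensor (True = action allowed); its construction
    is a hypothetical safety check, not part of this sketch.
    """
    q = q_net(state)
    q = q.masked_fill(~safe_mask, float("-inf"))  # shielded actions can never win the argmax
    return int(q.argmax(dim=-1).item())


# Example: 'keep lane' and 'change left' are safe, 'change right' is shielded.
net = DuelingQNet(state_dim=8)
s = torch.randn(1, 8)
mask = torch.tensor([[True, True, False]])
print(masked_greedy_action(net, s, mask))
```

Setting shielded actions to negative infinity before the argmax guarantees that a dangerous action can never be selected regardless of its learned Q-value, which is the sense in which shielding eliminates collisions rather than merely penalizing them through negative rewards.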
