Abstract

In this paper, we focus on the problem of highway merging via a parallel-type on-ramp for autonomous vehicles (AVs) in a decentralized, non-cooperative setting. This problem is challenging because of the highly dynamic and complex road environment. A deep reinforcement learning-based approach is proposed. The kernel of this approach is a Deep Q-Network (DQN) that takes the dynamic traffic state as input and outputs actions including longitudinal acceleration (or deceleration) and lane merge. The total reward for this on-ramp merge problem consists of three parts: the merge success reward, the merge safety reward, and the merge efficiency reward. For model training and testing, we construct highway on-ramp merging simulation experiments with realistic driving parameters. The experimental results show that the proposed approach can make reasonable merging decisions based on observation of the traffic environment. We also compare our approach with a state-of-the-art approach; the comparison demonstrates the superior performance of our approach in making challenging merging decisions in complex highway parallel-type on-ramp merging scenarios.
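
To make the described setup concrete, the following is a minimal sketch (not the authors' released code) of a DQN whose discrete action set covers longitudinal acceleration/deceleration and a merge decision, together with an illustrative reward that sums success, safety, and efficiency terms. The action discretization, state dimensionality, reward weights, and all function and variable names are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Assumed discretization of the paper's action space (acceleration/deceleration + merge).
ACTIONS = ["accelerate", "decelerate", "maintain", "merge"]

class MergeDQN(nn.Module):
    """Maps a flattened traffic-state observation to Q-values over the discrete actions."""
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, len(ACTIONS)),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def merge_reward(merged: bool, collided: bool, gap_to_lead_m: float,
                 speed_mps: float, target_speed_mps: float = 25.0) -> float:
    """Illustrative three-part reward: success + safety + efficiency (weights assumed)."""
    r_success = 1.0 if merged else 0.0
    r_safety = -1.0 if collided else 0.1 * min(gap_to_lead_m / 50.0, 1.0)
    r_efficiency = -0.1 * abs(speed_mps - target_speed_mps) / target_speed_mps
    return r_success + r_safety + r_efficiency

# Example: greedy action selection from a random 12-dimensional observation (dimension assumed).
state = torch.randn(1, 12)
q_values = MergeDQN(state_dim=12)(state)
action = ACTIONS[int(q_values.argmax(dim=1))]
```

In practice the network would be trained with standard DQN machinery (replay buffer, target network, epsilon-greedy exploration); this sketch only illustrates how the state-to-action mapping and the three-part reward described in the abstract could be organized.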
