Abstract

Intelligent lane-changing decision-making for autonomous vehicles has long been a focal point of research in the industry. Traditional lane-changing algorithms, which rely on predefined rules, are ill-suited to the complexity and variability of real-world road conditions. In this study, we propose an algorithm that combines deep deterministic policy gradient (DDPG) reinforcement learning with a long short-term memory (LSTM) trajectory prediction model, termed LSTM-DDPG. In the proposed model, the LSTM state module transforms observations from the observation module into a state representation that serves as direct input to the DDPG actor network, while the LSTM prediction module maps the historical trajectory coordinates of nearby vehicles into a word-embedding vector via a fully connected layer, thereby providing predicted trajectories for surrounding vehicles. This integrated LSTM approach accounts for the potential influence of nearby vehicles on the subject vehicle's lane-changing decisions. Our study further emphasizes the safety, efficiency, and comfort of the lane-changing process: we designed a reward and penalty function for the LSTM-DDPG algorithm and determined the optimal network structure parameters. The algorithm was then tested on a simulation platform built with MATLAB/Simulink. Our findings indicate that the LSTM-DDPG model offers a more realistic representation of traffic scenarios involving vehicle interactions. Compared with the traditional DDPG algorithm, LSTM-DDPG achieved a 7.4% increase in normalized average single-step reward, underscoring its superior performance in lane-changing safety and efficiency. This research offers new directions for advanced lane-changing decision-making in autonomous vehicles.
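As a concrete illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch of the two LSTM modules and the DDPG actor. All class names, dimensions, and hyperparameters here are illustrative assumptions, not the paper's implementation (the paper's experiments were conducted in MATLAB/Simulink), and the critic network and training loop of DDPG are omitted.

```python
# Hypothetical sketch of the LSTM-DDPG actor path described in the abstract.
# Module names, dimensions, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMStateModule(nn.Module):
    """Encodes a window of raw observations into a state vector for the actor."""
    def __init__(self, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim); last hidden state is the state representation
        _, (h_n, _) = self.lstm(obs_seq)
        return h_n[-1]  # (batch, hidden_dim)

class TrajectoryPredictionModule(nn.Module):
    """Embeds neighbor-vehicle trajectory coordinates and predicts the next position."""
    def __init__(self, coord_dim: int = 2, embed_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Linear(coord_dim, embed_dim)  # FC "word-embedding" of (x, y) points
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, coord_dim)  # one-step-ahead position prediction

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, time, 2) historical (x, y) coordinates of a nearby vehicle
        out, _ = self.lstm(self.embed(traj))
        return self.head(out[:, -1])  # predicted next (x, y)

class Actor(nn.Module):
    """DDPG actor: maps the fused state to a continuous lane-change action."""
    def __init__(self, state_dim: int, action_dim: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # bounded action output
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Wiring: encode ego observations, predict a neighbor's next position, act on both.
state_module = LSTMStateModule(obs_dim=6)
pred_module = TrajectoryPredictionModule()
actor = Actor(state_dim=64 + 2)

obs_seq = torch.randn(1, 10, 6)        # 10 timesteps of ego-vehicle observations
neighbor_traj = torch.randn(1, 10, 2)  # 10 timesteps of a neighbor's (x, y) positions
state = torch.cat([state_module(obs_seq), pred_module(neighbor_traj)], dim=-1)
action = actor(state)                  # continuous lane-change command in [-1, 1]
```

Concatenating the predicted neighbor position with the encoded ego state, as done here, is one plausible way to realize the abstract's claim that the actor's decision accounts for surrounding vehicles; the paper may fuse the two signals differently.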
