Abstract

Autonomous lane changing, a key module for realizing high-level automated driving, has important practical significance for improving driving safety, comfort, and commuting efficiency. Traditional controllers suffer from weak scene adaptability and difficulty in balancing multi-objective optimization. In this paper, exploiting the self-learning ability of reinforcement learning, an interactive model predictive control algorithm is designed to realize tracking control of the lane-change trajectory. Two typical scenarios are verified in PreScan and Simulink, and the results show that the proposed control algorithm significantly improves the tracking accuracy and stability of the lane-change trajectory.

Highlights

  • RLMPC trajectory tracking control algorithm design: tracking control of the trajectory is the key to realizing autonomous lane change

  • Reinforcement learning can learn interactively from the external environment, which gives the reinforcement-learning-based Model Predictive Control (MPC) prediction model more accurate predictions and the ability to reflect the external environment in real time

  • In formula (16), when the adjustment decision value is small, the feedback correction adjustment is small, so RLMPC takes longer to adjust but the adjustment process is relatively stable; when the adjustment decision value is large, the feedback correction adjustment is larger and the adjustment time is short, but the adjustment is prone to instability; when the adjustment decision value is 0, the controller makes no adjustment and behaves identically to a traditional MPC controller
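The behavior described above can be illustrated with a minimal sketch; the scalar form and the names (`alpha`, `corrected_prediction`) are illustrative assumptions, not the paper's formula (16):

```python
def corrected_prediction(y_pred: float, y_meas: float, alpha: float) -> float:
    """Feedback-corrected prediction, where alpha plays the role of the
    RL-chosen adjustment decision value described above.

    alpha = 0    -> no correction: plain MPC behavior
    small alpha  -> small correction: slower but smoother adjustment
    large alpha  -> large correction: faster but prone to instability
    """
    error = y_meas - y_pred          # model mismatch observed this step
    return y_pred + alpha * error    # scaled feedback correction

# With alpha = 0 the measurement is ignored entirely:
# corrected_prediction(1.0, 2.0, 0.0) -> 1.0
```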


Summary

RLMPC trajectory tracking control algorithm design

Tracking control of the trajectory is the key to realizing autonomous lane change. MPC has natural advantages in handling multiple constraints, but suffers from weak scene adaptability. This section builds on MPC and combines it with reinforcement learning to improve the algorithm.

Overall structure
Reference trajectory design
Rolling optimization module design
Predictive model design
Feedback correction module design
Verification platform and parameter setting
Conclusion
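As a rough illustration of the overall structure outlined above, the following toy loop stands in for the rolling-optimization cycle. The PD law and the 1-D point-mass model are simplifying assumptions made here for illustration; the actual controller solves a constrained optimization over a prediction horizon, with the reinforcement-learning agent correcting the prediction model online:

```python
def track_reference(ref: float, n_steps: int = 200, dt: float = 0.1) -> float:
    """Toy closed-loop tracker for a 1-D point mass following a
    reference position.

    A PD law stands in for the MPC rolling optimization at each step;
    in RLMPC the prediction model would additionally be corrected by
    the RL agent's adjustment decision value.
    """
    x, v = 0.0, 0.0                    # position, velocity
    kp, kd = 4.0, 2.5                  # stand-in controller gains
    for _ in range(n_steps):
        u = kp * (ref - x) - kd * v    # pick the control for this step
        v += u * dt                    # point-mass dynamics (Euler step)
        x += v * dt
    return x
```

With these gains the loop is stable and the position settles on the reference, which is the qualitative behavior the tracking controller must achieve.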
