Abstract

Path tracking control for autonomous vehicles using a model predictive control (MPC) algorithm maintains maneuverability by calculating a sequence of control inputs that minimizes the tracking error. A weakness of this method is that MPC performance may degrade significantly when the a priori prediction model is inaccurate. It is therefore important to keep the vehicle stable when the MPC model contains errors. This paper uses on-line model-based reinforcement learning (RL) to reduce the path error by learning unknown parameters and updating the prediction model. For validation, two kinds of path tracking simulation are conducted: the first compares the performance of on-line model-based RL against MPC under model error; the second tests the case where the model used in MPC and the true dynamics, which actually receive the input, have different tire models. The model-based RL method succeeds in learning the unknown tire parameters and maintains maneuverability in both simulations.
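The interplay the abstract describes, receding-horizon MPC whose prediction model is corrected on-line from observed transitions, can be illustrated with a toy sketch. This is not the paper's vehicle or tire model: it uses a hypothetical one-dimensional system `x_{k+1} = x_k + b·u_k` where the true gain `b_true` (standing in for an unknown tire parameter) differs from the controller's estimate `b_hat`, and a simple gradient step on the one-step prediction error plays the role of the model-based RL update. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def mpc_control(x, ref, b_hat, horizon=5, u_grid=np.linspace(-1, 1, 201)):
    """Pick the constant input over the horizon that minimizes predicted
    tracking error (a crude grid search standing in for the optimization
    a real MPC would solve)."""
    best_u, best_cost = 0.0, float("inf")
    for u in u_grid:
        xp, cost = x, 0.0
        for _ in range(horizon):
            xp = xp + b_hat * u          # prediction model (may be wrong)
            cost += (xp - ref) ** 2      # accumulated tracking error
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

def run(b_true=0.3, b_init=1.0, ref=1.0, steps=60, lr=0.5):
    """Closed loop: MPC plans with b_hat, the true plant responds with
    b_true, and b_hat is corrected from the observed prediction error."""
    x, b_hat = 0.0, b_init
    for _ in range(steps):
        u = mpc_control(x, ref, b_hat)
        x_next = x + b_true * u          # true dynamics (model mismatch)
        pred = x + b_hat * u             # what the MPC model predicted
        b_hat -= lr * (pred - x_next) * u  # on-line model update
        x = x_next
    return x, b_hat
```

With the mismatched initial model the controller undershoots at first, but because the update shrinks `b_hat - b_true` whenever the input is nonzero, the estimate moves toward the true gain and tracking recovers; with a fixed wrong model the same loop would settle more slowly or stall short of the reference.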
