Abstract

This paper addresses the trajectory tracking control problem for an autonomous vehicle using reinforcement learning methods. Existing reinforcement learning approaches have seen limited success on safety-critical tasks in the real world, mainly due to two challenges: 1) sim-to-real transfer; 2) closed-loop stability and safety concerns. In this paper, we propose an actor-critic-style framework, SRL-TR², in which the RL-based TRajectory TRackers are trained under safety constraints and then deployed to a full-size vehicle as the lateral controller. To improve generalization, we adopt a lightweight adapter, State and Action Space Alignment (SASA), to establish mapping relations between simulation and reality. To address the safety concern, we leverage an expert strategy that takes over control when the safety constraints are not satisfied. This enables safe exploration during training and improves the stability of the policy. Experiments show that our agents achieve one-shot transfer across simulation scenarios and unseen realistic scenarios, finishing the field tests with an average running time of less than 10 ms/step and an average lateral error of less than 0.1 m at speeds ranging from 12 km/h to 18 km/h. A video of the field tests is available at https://youtu.be/pjWcN_fV24g.
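The expert-takeover mechanism described above can be sketched as a simple action-selection rule; this is a minimal illustration, not the authors' implementation, and all function names (`is_safe`, `policy_action`, `expert_action`) are hypothetical:

```python
def select_action(state, policy_action, expert_action, is_safe):
    """During training, return the RL policy's action when the state
    satisfies the safety constraints; otherwise fall back to the
    expert controller (the takeover described in the abstract)."""
    if is_safe(state):
        return policy_action(state)
    return expert_action(state)


if __name__ == "__main__":
    # Toy usage: here the "safety constraint" is simply that the
    # lateral error stays below 0.5 m (an illustrative threshold,
    # not a value from the paper).
    is_safe = lambda s: abs(s["lateral_error"]) < 0.5
    policy = lambda s: "rl_steering"
    expert = lambda s: "expert_steering"

    print(select_action({"lateral_error": 0.1}, policy, expert, is_safe))
    print(select_action({"lateral_error": 0.9}, policy, expert, is_safe))
```

In practice such a rule yields safe exploration: the learner only ever executes its own actions inside the constraint set, while the expert keeps the closed loop stable elsewhere.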
