Abstract

In the artificial intelligence community, researchers have devoted much effort to applying deep reinforcement learning algorithms to autonomous driving. Under the deep reinforcement learning framework, the racing-car agent must interact with its external environment to accumulate sufficient driving experience. However, this interaction process is usually inefficient, risky, and time-consuming. Furthermore, a common problem in related studies is that the braking policy is difficult to master. In this paper, we adopt a priori knowledge of vehicle dynamics to design the brake force and incorporate it into the actor-critic network through a soft-learning strategy. In addition, several effective strategies are developed to improve training efficiency and control performance. The Open Racing Car Simulator (TORCS) is adopted to evaluate our algorithm. The simulation results demonstrate the effectiveness of the proposed algorithm, with better learning efficiency, robustness, and generalization performance.
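The "soft-learning" update mentioned in the abstract is not specified further here; a common form in actor-critic methods is Polyak averaging, where parameters are blended toward a source by a small factor. The sketch below illustrates that idea in plain Python; the function name, the parameter lists, and the value of `tau` are all illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a Polyak-style soft update, one plausible reading of the
# abstract's "soft-learning strategy" for blending a hand-designed brake
# policy into network parameters. All names and values are illustrative.

def soft_update(target_params, source_params, tau=0.01):
    """Move each target parameter a small step toward the source:
    target <- (1 - tau) * target + tau * source."""
    return [(1.0 - tau) * t + tau * s
            for t, s in zip(target_params, source_params)]

# Repeated blending drives the target geometrically toward the source.
target = [0.0, 0.0]
source = [1.0, 2.0]
for _ in range(5):
    target = soft_update(target, source, tau=0.5)
```

With `tau` close to 0 the target tracks the source slowly, which is what makes such updates stable during training.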
