Abstract

Most existing works on autonomous driving study the three main modules of autonomous vehicles, i.e., sensing, decision making, and motion control, separately, overlooking the correlations among these modules and therefore yielding unsatisfactory performance. In this paper, we propose a novel scheme that first processes the sensing data and then jointly learns and optimizes decision making and motion control using reinforcement learning (RL). Specifically, the proposed scheme designs a novel state representation mechanism in which the sensing data passes through an attention layer and a convolutional neural network (CNN) layer sequentially. The attention layer extracts the most important local information, and the CNN layer then takes a broad view to comprehensively consider the global information for a better representation. Furthermore, because the proposed scheme jointly learns decision making and motion control, the relevance between these two modules is implicitly considered, which helps achieve a better autonomous driving policy. Extensive simulation results show that the proposed scheme outperforms classic control methods and several RL methods in terms of safety, velocity, etc. We also demonstrate the respective functions of the attention layer and the CNN layer through ablation studies. Finally, we construct a traffic scene with a real autonomous vehicle and verify the feasibility of the proposed scheme.
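The abstract does not give implementation details of the attention-then-CNN state representation, so the following is only a rough NumPy sketch under stated assumptions: the sensing data is taken as a set of local feature vectors (one per spatial patch), the attention layer is plain scaled dot-product self-attention with randomly initialized projections, and the "CNN layer" is reduced to a simple 1-D valid convolution over the patch axis. All shapes, layer sizes, and function names here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_layer(x, d_k=8, seed=0):
    # x: (n_patches, d) local sensing features; weights are random
    # placeholders for learned projections (an assumption for this sketch)
    rng = np.random.default_rng(seed)
    n, d = x.shape
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    w = softmax(q @ k.T / np.sqrt(d_k))   # (n, n) attention over local patches
    return w @ v                          # attended local features, (n, d_k)

def conv_layer(x, kernel_size=3):
    # minimal 1-D valid convolution over the patch axis, standing in for
    # the CNN layer that aggregates global context
    n, d = x.shape
    kernel = np.ones((kernel_size, d)) / (kernel_size * d)
    return np.array([np.sum(x[i:i + kernel_size] * kernel)
                     for i in range(n - kernel_size + 1)])

def encode_state(sensing):
    # sensing data -> attention layer -> CNN layer, as described in the text
    return conv_layer(attention_layer(sensing))

state = encode_state(np.random.default_rng(1).standard_normal((16, 12)))
```

The resulting `state` vector would then serve as the RL agent's observation, over which decision making and motion control are learned jointly.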
