Abstract

Trajectory planning for autonomous vehicles requires making safe sequential decisions in real time. The most significant challenge is the uncertainty arising from complex interactions with other road users across diverse driving scenarios. Rule-based solutions lack generalization and are suited only to a limited set of simple environments, while motion-planning approaches that generate feasible action sequences ignore the interactions among traffic participants. This paper proposes a hierarchical framework based on reinforcement learning for autonomous driving. Instead of directly mapping sensor information to low-level control signals, we first choose a desired state and generate a target trajectory in the Frenet frame according to that desired state; a low-level controller then tracks the target trajectory. A reinforcement learning algorithm combining planning and learning is introduced to choose the desired state. The trajectory planning framework is evaluated on a lane-switching case in a simulated environment, and the simulation results demonstrate the effectiveness of the proposed method.
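A common way to generate a target trajectory in the Frenet frame, as the abstract describes, is to fit a quintic polynomial in the lateral coordinate d(t) between the current state and the chosen desired state. The sketch below is illustrative only (the paper does not specify its trajectory parameterization); the function name `quintic_coeffs` and all numeric values are assumptions.

```python
import numpy as np

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    """Solve for quintic coefficients c0..c5 so that
    x(t) = sum(c_i * t**i) matches position, velocity, and
    acceleration boundary conditions at t=0 and t=T."""
    # Conditions at t=0 fix the first three coefficients directly.
    c0, c1, c2 = x0, v0, a0 / 2.0
    # The remaining three follow from the conditions at t=T.
    A = np.array([
        [T**3,   T**4,    T**5],
        [3*T**2, 4*T**3,  5*T**4],
        [6*T,    12*T**2, 20*T**3],
    ])
    b = np.array([
        xT - (c0 + c1*T + c2*T**2),
        vT - (c1 + 2*c2*T),
        aT - 2*c2,
    ])
    c3, c4, c5 = np.linalg.solve(A, b)
    return np.array([c0, c1, c2, c3, c4, c5])

# Example: lateral move from the current lane (d = 0) to an adjacent
# lane (d = 3.5 m) in T = 4 s, starting and ending at rest laterally.
coeffs = quintic_coeffs(0.0, 0.0, 0.0, 3.5, 0.0, 0.0, 4.0)
ts = np.linspace(0.0, 4.0, 9)
d = np.polyval(coeffs[::-1], ts)  # d(t) sampled along the horizon
```

The same construction applies to the longitudinal coordinate s(t); the low-level controller then tracks the resulting (s, d) trajectory after mapping it back to Cartesian coordinates.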
