Abstract

As a core part of an autonomous driving system, motion planning plays an important role in safe driving. However, traditional model- and rule-based methods lack the ability to learn interactively from the environment, while learning-based methods still suffer from reliability problems. To overcome these problems, a hybrid motion planning framework (HMPF), composed of learning-based behavior planning and optimization-based trajectory planning, is proposed to improve motion planning performance. The behavior planning module adopts a deep reinforcement learning (DRL) algorithm, which learns from the interaction between the ego vehicle (EV) and other human-driven vehicles (HDVs) and generates behavior decision commands based on environmental perception information. In particular, an intelligent driver model (IDM) calibrated with real driving data is used to drive the HDVs so that they imitate human driving behavior and interactive responses, thereby simulating the bidirectional interaction between the EV and HDVs. Meanwhile, the trajectory planning module adopts an optimization method formulated in road Frenet coordinates, which generates a safe and comfortable desired trajectory while reducing the dimension of the optimization problem. In addition, trajectory planning also acts as a hard safety constraint on behavior planning to ensure the feasibility of the decision commands. The experimental results demonstrate the effectiveness and feasibility of the proposed HMPF for autonomous driving motion planning in urban mixed traffic flow scenarios.
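For context, the IDM mentioned above has a standard closed form in the car-following literature. The sketch below illustrates how such a model might drive an HDV's longitudinal response; the function name and all parameter values are illustrative assumptions, not the calibrated values reported in the paper.

```python
import numpy as np

def idm_acceleration(v, v_lead, gap,
                     v_desired=15.0,   # desired speed [m/s] (assumed)
                     T=1.5,            # desired time headway [s] (assumed)
                     s0=2.0,           # minimum standstill gap [m] (assumed)
                     a_max=1.5,        # maximum acceleration [m/s^2] (assumed)
                     b=2.0,            # comfortable deceleration [m/s^2] (assumed)
                     delta=4.0):       # acceleration exponent (assumed)
    """Standard IDM longitudinal acceleration for a following vehicle.

    v      : speed of the following (human-driven) vehicle [m/s]
    v_lead : speed of the lead vehicle [m/s]
    gap    : bumper-to-bumper gap to the lead vehicle [m]
    """
    dv = v - v_lead                                   # approach rate
    # Desired dynamic gap s*(v, dv)
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a_max * b)))
    # Free-flow term minus interaction term
    return a_max * (1.0 - (v / v_desired) ** delta - (s_star / max(gap, 1e-3)) ** 2)
```

In such a setup, calibrating parameters like the time headway and comfortable deceleration against recorded trajectories is what allows the simulated HDVs to respond realistically to the EV's maneuvers.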
