Abstract

This paper addresses the trajectory planning problem for autonomous vehicles in traffic. We build a stochastic Markov decision process (MDP) model to represent the behaviors of the vehicles. This MDP model takes the road geometry into account and can reproduce a diverse range of driving styles. We introduce a new concept, the “dynamic cell,” to dynamically adjust the representation of the traffic state according to the velocities, driver intents (turn signals), and sizes of the surrounding vehicles (e.g., truck, sedan). We then use Bézier curves to plan smooth lane-change paths, and the maximum curvature of the path is bounded through appropriate design parameters. By designing suitable reward functions, different desired driving styles of the intelligent vehicle can be achieved by solving a reinforcement learning problem. The desired driving behaviors (e.g., autonomous highway overtaking) are demonstrated with an in-house developed traffic simulator.
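
The abstract does not reproduce the planner itself, but the idea of a Bézier lane-change path whose peak curvature is controlled by design parameters can be sketched as follows. This is a minimal illustration, not the authors' implementation: the control-point placement (parameter d), lane width, path length, curvature bound, and function names below are assumptions made for the example.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve and its first two derivatives at parameters t."""
    t = t[:, None]
    b = ((1 - t) ** 3) * p0 + 3 * ((1 - t) ** 2) * t * p1 \
        + 3 * (1 - t) * (t ** 2) * p2 + (t ** 3) * p3
    db = 3 * ((1 - t) ** 2) * (p1 - p0) + 6 * (1 - t) * t * (p2 - p1) \
        + 3 * (t ** 2) * (p3 - p2)
    ddb = 6 * (1 - t) * (p2 - 2 * p1 + p0) + 6 * t * (p3 - 2 * p2 + p1)
    return b, db, ddb

def lane_change_path(lane_width=3.5, length=40.0, d=12.0, n=200):
    """Plan a lane-change path from the current lane center to the adjacent one.

    Here d is the longitudinal offset of the interior control points; a larger d
    stretches the maneuver and lowers the peak curvature, playing the role of
    the curvature-shaping design parameter mentioned in the abstract.
    """
    p0 = np.array([0.0, 0.0])                 # start on current lane centerline
    p1 = np.array([d, 0.0])                   # keep initial heading along the lane
    p2 = np.array([length - d, lane_width])   # approach the target lane tangentially
    p3 = np.array([length, lane_width])       # end on target lane centerline
    t = np.linspace(0.0, 1.0, n)
    b, db, ddb = cubic_bezier(p0, p1, p2, p3, t)
    # Curvature: kappa(t) = |x' y'' - y' x''| / (x'^2 + y'^2)^(3/2)
    num = np.abs(db[:, 0] * ddb[:, 1] - db[:, 1] * ddb[:, 0])
    den = (db[:, 0] ** 2 + db[:, 1] ** 2) ** 1.5
    kappa = num / den
    return b, kappa.max()

if __name__ == "__main__":
    kappa_max_allowed = 0.15  # illustrative bound from a vehicle steering limit, in 1/m
    path, kappa_max = lane_change_path()
    print(f"peak curvature {kappa_max:.4f} 1/m "
          f"({'ok' if kappa_max <= kappa_max_allowed else 'too sharp'})")
```

In this sketch the path can be regenerated with a larger d (or a longer maneuver length) whenever the peak curvature exceeds the allowed bound, which is one simple way a curvature limit can be enforced through the curve's design parameters.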
