Abstract

Autonomous driving systems face the challenge of computing a safe, feasible, and human-like driving policy quickly and efficiently. The traditional approach usually involves search- or optimization-based planning followed by a model-based controller. This may prove inadequate in some driving scenarios due to disturbances, uncertainties, and limited computation time. More recent end-to-end approaches aim to overcome these issues by learning a policy that maps sensor data to controls using machine learning techniques. Although attractive for their simplicity, these approaches also show drawbacks such as sample inefficiency and difficulties in validation and interpretability. This work presents an approach that attempts to exploit the best of both worlds, combining learning-based and model-based control in an imitation learning framework that mimics expert driving behavior while producing safe and smooth driving. The dataset is generated from high-fidelity simulations of vehicle dynamics and model predictive control (MPC). The policy is represented as a smooth spline-based motion plan produced by a constrained neural network that exploits the convex hull property of B-splines. The policy network is trained with a few dataset aggregations drawn from its induced distribution of states. The learned policy is used as guidance for model-based feedback control and tested on a 15-DOF high-fidelity vehicle model.
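The convex hull property mentioned in the abstract states that a B-spline curve always stays inside the convex hull of its control points, so constraining the (network-predicted) control points bounds the entire planned trajectory. A minimal sketch of this property, using `scipy.interpolate.BSpline` and hypothetical control-point values (not taken from the paper):

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline over a clamped uniform knot vector.
# Control points are illustrative only (e.g. lateral offsets of a planned path).
k = 3
ctrl = np.array([0.0, 1.0, 2.5, 2.0, 1.0, 0.5])
n = len(ctrl)
# Clamped knot vector: len(t) must equal n + k + 1.
t = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, n - k + 1), [1.0] * k))
spline = BSpline(t, ctrl, k)

u = np.linspace(0.0, 1.0, 200)
vals = spline(u)

# Convex hull property (1D case): the curve never leaves [min(ctrl), max(ctrl)],
# so bounding the control points bounds the whole trajectory.
assert vals.min() >= ctrl.min() - 1e-9
assert vals.max() <= ctrl.max() + 1e-9
```

In a planner this means feasibility constraints (lane boundaries, actuator limits) can be enforced on the finitely many control points rather than on every point along the continuous trajectory.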
