Abstract

In this article, we introduce LVD-NMPC, a learning-based vision dynamics approach to nonlinear model predictive control (NMPC) for autonomous vehicles. LVD-NMPC uses an a-priori process model together with a learned vision dynamics model to calculate the dynamics of the driving scene, the controlled system’s desired state trajectory, and the weighting gains of the quadratic cost function optimized by a constrained predictive controller. The vision dynamics model is a deep neural network that estimates the dynamics of the image scene from historic sequences of sensory observations and vehicle states, integrated by an augmented memory component. Deep Q-learning is used to train the deep network, which, once trained, can also be used to calculate the desired trajectory of the vehicle. We evaluate LVD-NMPC against a baseline dynamic window approach (DWA) path planner executed by a standard NMPC, as well as against the PilotNet neural network. Performance is measured in our GridSim simulation environment, on a real-world 1:8 scaled model car, on a full-size autonomous test vehicle, and on the nuScenes computer vision dataset.
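To make the data flow concrete, the sketch below outlines one control step of such a pipeline. It is a minimal illustration rather than the authors' implementation: vision_dynamics and nmpc_solve are hypothetical placeholders for the trained deep network and the constrained optimizer, respectively.

    def lvd_nmpc_step(z_t, obs_history, state_history, vision_dynamics, nmpc_solve):
        """One control step of an LVD-NMPC-style pipeline (illustrative sketch)."""
        # The learned vision dynamics model consumes historic sensory observations
        # and vehicle states (the augmented memory component) and returns the
        # desired state trajectory plus the quadratic cost weighting gains.
        z_desired, Q, R = vision_dynamics(obs_history, state_history)

        # The constrained NMPC minimizes the quadratic cost built from the
        # a-priori process model, the desired trajectory, and the learned gains
        # over the prediction horizon; only the first control command is applied.
        u_sequence = nmpc_solve(z_t, z_desired, Q, R)
        return u_sequence[0]

In a receding-horizon loop, this step repeats at every sampling instant with the horizon shifted forward by one step.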

Highlights

  • Research in the area of autonomous driving has been boosted over the last decade by both academia and industry

  • Recent years have witnessed a growing trend of applying deep learning techniques to autonomous driving, especially in the areas of End2End learning, as in the methods proposed by Pan et al. [3], Fan et al. [7], and Bojarski et al. [2], as well as in Deep Reinforcement Learning (DRL)

  • As detailed in the Learning a Vision Dynamics Model section, a Deep Neural Network (DNN) is utilized to encode h(·); on top of h(·), we define a quadratic cost function to be optimized by the constrained Nonlinear Model Predictive Control (NMPC) over the discrete time interval [t + 1, t + τo], with the system state defined as s = (z, I) (Eq. 3); a reconstruction of this cost is sketched after this list
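The paper's exact cost is defined in its Method section; the form below is a generic reconstruction of a constrained quadratic NMPC tracking cost consistent with the highlight, where z^d denotes the desired state trajectory and Q, R the weighting gains, both produced by the learned vision dynamics model:

    J = \sum_{i=t+1}^{t+\tau_o} \left[ (z_i^d - z_i)^\top Q \, (z_i^d - z_i) + u_i^\top R \, u_i \right],
    \quad \text{subject to } z_{i+1} = f(z_i, u_i) \text{ and actuator constraints},

where f(·) is the a-priori process model, u_i the control input at step i, and the state s = (z, I) couples the vehicle state z with the image observation I.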



Introduction

Research in the area of autonomous driving has been boosted over the last decade by both academia and industry. A tighter coupling of perception and control was researched in the field of robotic manipulation through the concept of visual servoing, as in the case of the manipulation fault detector of Gu et al. [5]. This is not yet the case in autonomous vehicles, where intrinsic dependencies between the different modules of the driving functions are not taken into account. Synergies between data-driven and classical control methods have been considered for imitation learning, where steering and acceleration control signals are calculated in an End2End manner, as by Pan et al. [3]. Their approach is designed for driving environments with predefined boundaries, without any obstacles present on the driving track. We improve on the traditional visual approach by replacing the classical perception-planning pipeline with a learned vision dynamics model.

