Abstract

In autonomous driving, many learning-based motion planning methods have been proposed in the literature that predict motion commands directly from sensory data of the environment. However, these methods can neither predict multiple motion commands, such as steering angle, acceleration, and brake, nor balance the errors among different motion commands. In this paper, we propose a deep cascaded neural network for predicting multiple motion commands that can be trained in an end-to-end manner for autonomous driving. The proposed network consists of a convolutional neural network (CNN) and three long short-term memory (LSTM) units, fed with images from a front-facing camera mounted on the vehicle. As outputs, the proposed model simultaneously predicts three motion planning commands: steering angle, acceleration, and brake. To balance the errors among the different motion commands and improve prediction accuracy, we propose a new training algorithm in which three independent loss functions separately update the weights of the three LSTMs, each connected to one motion command. We conduct comprehensive experiments using data from a driving simulator and compare our method with state-of-the-art methods. Simulation results demonstrate that the proposed motion planning model achieves higher prediction accuracy than the compared models.
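The per-command training idea can be illustrated with a minimal sketch: a shared feature vector (standing in for the CNN output) feeds three separate output heads, and each head's weights are updated only by the loss on its own command. All names and the tiny linear heads here are illustrative assumptions; the paper itself uses a CNN feature extractor and three LSTM units, not linear layers.

```python
# Hedged sketch of per-command loss updates, assuming a shared feature
# vector and three linear heads (steer, accel, brake) as stand-ins for
# the CNN + three-LSTM architecture described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

feat_dim = 8
# One weight vector per motion command; each is trained independently.
heads = {cmd: rng.normal(scale=0.1, size=feat_dim)
         for cmd in ("steer", "accel", "brake")}

def train_step(features, targets, lr=0.01):
    """One update: each head changes only under its own MSE loss."""
    losses = {}
    for cmd, w in heads.items():
        pred = features @ w
        err = pred - targets[cmd]
        losses[cmd] = err ** 2
        # Gradient of this command's squared error w.r.t. its own head only;
        # the other heads (and their losses) are untouched by this update.
        heads[cmd] = w - lr * 2.0 * err * features
    return losses

features = rng.normal(size=feat_dim)           # stand-in for CNN features
targets = {"steer": 0.1, "accel": 0.5, "brake": 0.0}
for _ in range(200):
    losses = train_step(features, targets)
```

Because each loss touches only its own head, a large error on one command (e.g. steering) cannot dominate the gradient of the others, which is the balancing effect the abstract attributes to the three independent loss functions.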
