Abstract

This work applies Deep Reinforcement Learning (DRL) to control an autonomous vehicle in the hyper-realistic urban simulation LGSVL. Classical control systems such as Model Predictive Control (MPC) maneuver vehicles based on a given trajectory, the current velocity and position, distances to obstacles, and more. Our approach passes none of this information to the DRL agent; it receives only the images provided by the camera. Existing DRL efforts exploit similar image-based approaches for autonomous driving, but they are only suitable for small, simple tasks in simple simulations. Our approach consists of two separately trained neural networks (NNs): a perception NN for representation learning and an actor NN for selecting the correct action. The perception NN is trained via self-supervised representation learning to strengthen the DRL agent's understanding of the scene, enabling it to capture temporal information and the dynamics of a complex environment. This work shows the importance of decoupling the perception and decision (actor) models for autonomous driving. Using our modular DRL framework, the vehicle drives autonomously in a hyper-realistic urban simulation. Moreover, the approach transfers to other image-based tasks in the field of robotics.
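To make the decoupling concrete, the following is a minimal PyTorch sketch of how such a two-network pipeline could be structured. All class names, layer sizes, the input resolution, and the two-dimensional action space are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a decoupled perception/actor pipeline (assumptions,
# not the authors' implementation): a separately trained image encoder
# feeds a compact latent state to a DRL policy head.
import torch
import torch.nn as nn


class PerceptionNet(nn.Module):
    """Encoder trained via self-supervised representation learning;
    maps camera images to a compact latent scene representation."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(latent_dim),  # infers input size on first call
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.encoder(image)


class ActorNet(nn.Module):
    """Policy head trained with DRL on top of the latent state;
    outputs continuous controls, e.g. steering and throttle."""

    def __init__(self, latent_dim: int = 128, action_dim: int = 2):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        return self.policy(latent)


# Decoupled inference: the perception network stays fixed while the
# actor selects an action from the latent state alone.
perception = PerceptionNet().eval()
actor = ActorNet()
frame = torch.rand(1, 3, 84, 84)  # one RGB camera frame (assumed size)
with torch.no_grad():
    state = perception(frame)     # latent scene representation
action = actor(state)             # e.g. [steering, throttle]
```

In this setup the actor never sees trajectories, velocities, or distances, only the latent state derived from camera images, which mirrors the separation of perception and decision described above.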
