Abstract

Motorsports have become an excellent playground for testing the limits of technology, machines, and human drivers. This paper presents a study that used a professional racing simulator to compare the behavior of human and autonomous drivers under an aggressive driving scenario. A professional simulator offers a close-to-real emulation of the underlying physics and vehicle dynamics, as well as a wealth of clean telemetry data. In the first part of the study, the participants' task was to achieve the fastest lap while keeping the car on the track. We grouped the resulting laps according to performance (lap time), defining driving behaviors at various performance levels. An extensive analysis of vehicle control features obtained from telemetry data was performed with the goal of predicting driving performance and informing an autonomous system. In the second part of the study, a state-of-the-art reinforcement learning (RL) algorithm was trained to control the brake, throttle, and steering of the simulated racing car. We investigated how the features used to predict driving performance in humans can be used in autonomous driving. Our study investigates human driving patterns with the goal of finding traces that could improve the performance of RL approaches. Conversely, these traces can also be applied to training (professional) drivers to improve their racing line.

Highlights

  • Reinforcement learning (RL) deals with the problem of learning optimal behaviors for the interaction of an agent with an environment by trial and error

  • This paper presents a data-driven approach, collecting data from human drivers, deriving features and developing a predictor model to inform the evaluation of a reinforcement learning (RL) system

  • Since our goal was to gather information on how humans drive and bring this to algorithms, we identified the following research questions: a) What control behaviors lead to better performance in terms of lap time? b) What can be learned from humans that could eventually make our autonomous driver achieve higher performance in less training time? c) Can an autonomous driver be trained using end-to-end reinforcement learning to perform as well as the highest-performing human? What level of performance can be achieved?


Introduction

RL deals with the problem of learning optimal behaviors for the interaction of an agent with an environment by trial and error. The agent observes some state in the environment and chooses an action, the result of which is another state and a reward obtained from the environment. The agent attempts to learn behaviors that maximize the accumulated reward obtained from the environment. The interaction of the agent with the environment takes place in discrete time steps t. At each step, starting from a state s_t, the agent executes an action a_t and receives a reward r_t and a new state s_{t+1} from the environment. The return from a state is defined as the sum of discounted future rewards.
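The interaction loop and the discounted return described above can be sketched in a few lines of Python. The `ToyTrackEnv` class below is a hypothetical stand-in for the racing simulator (it is not the paper's actual environment or action space), included only to make the state/action/reward cycle and the return computation concrete:

```python
def discounted_return(rewards, gamma=0.99):
    """Return from a state: the sum of discounted future rewards,
    G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ..."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

class ToyTrackEnv:
    """Illustrative stand-in for the simulator: the agent picks a throttle
    level each step; the episode ends after a fixed number of steps or when
    the throttle is too aggressive (the car 'leaves the track')."""
    def __init__(self, max_steps=10):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t                        # state s_t (here: just the step index)

    def step(self, action):
        self.t += 1
        reward = float(action)               # reward r_t
        done = action > 0.9 or self.t >= self.max_steps
        return self.t, reward, done          # new state s_{t+1}, r_t, episode end

# One episode of agent-environment interaction with a fixed illustrative policy.
env = ToyTrackEnv()
state = env.reset()
rewards, done = [], False
while not done:
    action = 0.5                             # a_t chosen by the (trivial) policy
    state, reward, done = env.step(action)   # environment returns s_{t+1} and r_t
    rewards.append(reward)

g = discounted_return(rewards, gamma=0.9)    # return accumulated from the start state
```

In an actual RL setup for this task, the state would contain telemetry (speed, track position, heading) and the action would be the continuous brake, throttle, and steering commands mentioned above; the learning algorithm then adjusts the policy to maximize this discounted return.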

