Abstract

Deep Reinforcement Learning (RL) algorithms are widely used in autonomous driving because of their ability to cope with unseen environments. However, in a complex domain such as autonomous driving, these algorithms must explore the environment extensively before they converge, so they suffer from long training times and require large amounts of data. Moreover, applying deep RL in safety-critical domains such as autonomous driving raises a safety issue, since the car cannot be left driving in the street unattended during training. In this research, we tested two methods for reducing the training time. First, we pre-trained Soft Actor-Critic (SAC) with Learning from Demonstrations (LfD) to determine whether pre-training can shorten the training time of the SAC algorithm. Then, an online end-to-end combination of SAC, LfD, and Learning from Interventions (LfI), dubbed Online Virtual Training, is proposed to train the agent. Both scenarios were implemented and tested on an inverted-pendulum task in OpenAI Gym and on autonomous driving in the CARLA simulator. The results showed a dramatic reduction in training time and a significant increase in reward for Online LfD (33%) and Online Virtual Training (36%) compared to the baseline SAC. The proposed approach is expected to be effective in daily-commute scenarios for autonomous driving.
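To make the LfD pre-training idea concrete, the sketch below shows one plausible way to seed an off-policy replay buffer with demonstration transitions and run gradient updates before any online interaction. This is not the paper's implementation: the use of Stable-Baselines3, the random placeholder `demo_policy`, and all hyperparameters are assumptions for illustration; only the SAC algorithm and the Gym inverted-pendulum task come from the abstract.

```python
# Minimal sketch (assumptions noted above) of LfD pre-training for SAC:
# fill the replay buffer with demonstration data, train offline, then
# continue with standard online SAC training.
import gymnasium as gym
import numpy as np
from stable_baselines3 import SAC
from stable_baselines3.common.logger import configure

env = gym.make("Pendulum-v1")  # inverted-pendulum task from the abstract
model = SAC("MlpPolicy", env, learning_starts=0)
model.set_logger(configure())  # required before calling model.train() directly


def demo_policy(obs):
    # Placeholder standing in for expert demonstrations; the paper would
    # use human or scripted driving/control demonstrations here.
    return env.action_space.sample()


# 1) Seed the replay buffer with demonstration transitions.
obs, _ = env.reset()
for _ in range(5_000):
    action = demo_policy(obs)
    next_obs, reward, terminated, truncated, info = env.step(action)
    model.replay_buffer.add(
        obs, next_obs, action,
        np.array([reward]),
        np.array([terminated or truncated]),
        # Flag timeouts so SAC bootstraps correctly on truncated episodes.
        [{"TimeLimit.truncated": truncated and not terminated}],
    )
    obs = env.reset()[0] if (terminated or truncated) else next_obs

# 2) Offline pre-training phase: update actor/critic from demos only.
model.train(gradient_steps=2_000, batch_size=256)

# 3) Online phase: standard SAC interaction with the environment.
model.learn(total_timesteps=50_000)
```

Under this reading, the paper's Online Virtual Training would write intervention transitions into the buffer through the same `replay_buffer.add` path during online rollouts, rather than only before training begins.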
