Abstract

Reinforcement Learning (RL) has emerged as a significant component of Machine Learning in the domain of highly automated driving, facilitating tasks ranging from high-level navigation to control tasks such as trajectory tracking and lane keeping. However, the agent's action choices during training are constrained by the trade-off between exploitation and exploration, which can impede effective learning, especially in environments with sparse rewards. To address this challenge, researchers have combined RL with sampling-based exploration methods such as Rapidly-exploring Random Trees (RRT) to aid exploration. This paper investigates the effectiveness of classic exploration strategies in RL algorithms, focusing in particular on their ability to cover the state space and provide a high-quality experience pool for learning agents. The study centers on the lane-keeping problem for a dynamic vehicle model controlled by RL, examining a scenario in which reward shaping is omitted and rewards are therefore sparse. The paper demonstrates that classic exploration techniques often cover only a small portion of the state space, hindering learning, and that by leveraging RRT to broaden the experience pool, the agent can learn a better policy, as exemplified by the dynamic vehicle model's lane-following problem.
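
The core idea of using RRT-style sampling to seed an RL agent's experience pool under sparse rewards can be illustrated with a minimal Python sketch. The kinematic model, state bounds, steering range, and reward used here are illustrative assumptions and do not reproduce the paper's actual dynamic vehicle model, algorithm, or training setup; they only show how tree expansions can be recorded as (state, action, reward, next state) transitions for later learning.

# Minimal sketch: seeding an RL experience pool with RRT-style exploration.
# The kinematic approximation, bounds, reward, and buffer layout below are
# illustrative assumptions, not the paper's vehicle model or method.
import math
import random

DT, V, L = 0.1, 10.0, 2.7            # time step [s], speed [m/s], wheelbase [m]
LANE_HALF_WIDTH = 1.75               # sparse reward: +1 only while inside the lane


def dist2(a, b):
    """Squared Euclidean distance between two states."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2


def step(state, steer):
    """Propagate a simplified kinematic model; state = (lateral offset y, heading psi)."""
    y, psi = state
    return (y + V * math.sin(psi) * DT, psi + V / L * math.tan(steer) * DT)


def sparse_reward(state):
    """Reward is non-zero only when the vehicle is inside the lane (no shaping)."""
    return 1.0 if abs(state[0]) < LANE_HALF_WIDTH else 0.0


def rrt_explore(n_iters=2000, n_candidates=5):
    """Grow a tree over the state space; every expansion yields a transition
    (s, a, r, s') appended to the experience pool for the learning agent."""
    tree = [(0.0, 0.0)]              # start centred in the lane, heading straight
    experience_pool = []
    for _ in range(n_iters):
        # Sample a random target state and find its nearest node in the tree.
        target = (random.uniform(-4.0, 4.0), random.uniform(-0.5, 0.5))
        nearest = min(tree, key=lambda s: dist2(s, target))
        # Try a few random steering actions; extend with the one ending closest to the target.
        candidates = [random.uniform(-0.3, 0.3) for _ in range(n_candidates)]
        action = min(candidates, key=lambda a: dist2(step(nearest, a), target))
        new_state = step(nearest, action)
        tree.append(new_state)
        experience_pool.append((nearest, action, sparse_reward(new_state), new_state))
    return experience_pool


if __name__ == "__main__":
    pool = rrt_explore()
    rewarded = sum(1 for (_, _, r, _) in pool if r > 0)
    print(f"{len(pool)} transitions collected, {rewarded} with non-zero reward")

In a full pipeline, the collected transitions would be merged into the agent's replay buffer alongside transitions gathered by its own (e.g. epsilon-greedy) behaviour policy, so that the sparse-reward states reached by the tree are represented in the data the agent learns from.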
