Abstract
The advancement of autonomous driving technology is increasingly vital in the modern technological landscape, promising notable gains in safety, efficiency, traffic management, and energy use. Despite these benefits, conventional deep reinforcement learning algorithms often struggle to navigate complex driving environments effectively. To address this challenge, we propose a novel network called DynamicNoise, designed to boost algorithmic performance by injecting noise into the deep Q-network (DQN) and double deep Q-network (DDQN). Drawing inspiration from the NoisyNet architecture, DynamicNoise uses stochastic perturbations to improve the exploration capabilities of these models, leading to more robust learning outcomes. Our experiments demonstrated a 57.25% improvement in navigation effectiveness in a 2D experimental setting. Moreover, by integrating noise into the action-selection and fully connected layers of the soft actor–critic (SAC) model in the more complex 3D CARLA simulation environment, our approach achieved an 18.9% performance gain, substantially surpassing traditional methods. These results confirm that DynamicNoise enhances the performance of autonomous driving systems across simulated environments of varying dimensionality and complexity by improving their exploration capabilities rather than merely their efficiency.
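The abstract describes injecting learned stochastic perturbations into fully connected layers to drive exploration. The paper's own DynamicNoise implementation is not shown here; the following is a minimal NumPy sketch of the general NoisyNet-style noisy linear layer it draws on, where each weight has a mean parameter and a noise-scale parameter, and factorised Gaussian noise is resampled on each noisy forward pass. All class and parameter names (`NoisyLinear`, `sigma0`) are illustrative assumptions, not the authors' code.

```python
import numpy as np

class NoisyLinear:
    """Illustrative NoisyNet-style layer (an assumption, not the paper's code):
    y = (mu_w + sigma_w * eps_w) @ x + (mu_b + sigma_b * eps_b),
    with factorised Gaussian noise eps resampled per forward pass."""

    def __init__(self, in_features, out_features, sigma0=0.5, seed=0):
        rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_features)
        # Learnable means, initialised uniformly as in standard linear layers.
        self.mu_w = rng.uniform(-bound, bound, (out_features, in_features))
        self.mu_b = rng.uniform(-bound, bound, out_features)
        # Learnable noise scales, initialised to a constant fraction of the bound.
        self.sigma_w = np.full((out_features, in_features), sigma0 * bound)
        self.sigma_b = np.full(out_features, sigma0 * bound)
        self.in_features = in_features
        self.out_features = out_features
        self.rng = rng

    @staticmethod
    def _f(x):
        # Factorised-noise scaling function: f(x) = sign(x) * sqrt(|x|).
        return np.sign(x) * np.sqrt(np.abs(x))

    def forward(self, x, noisy=True):
        if not noisy:
            # Deterministic pass (e.g. for evaluation): use means only.
            return self.mu_w @ x + self.mu_b
        # Factorised Gaussian noise: one vector per input, one per output.
        eps_in = self._f(self.rng.standard_normal(self.in_features))
        eps_out = self._f(self.rng.standard_normal(self.out_features))
        eps_w = np.outer(eps_out, eps_in)
        return (self.mu_w + self.sigma_w * eps_w) @ x \
             + (self.mu_b + self.sigma_b * eps_out)
```

Because the noise scales are parameters rather than a fixed epsilon schedule, a network built from such layers can learn how much to explore per weight, which is the exploration mechanism the abstract contrasts with conventional DQN/DDQN training.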