Abstract
In this paper, we propose a federated growing reinforcement learning (FGRL) approach to the mapless navigation problem of unmanned ground vehicles (UGVs) operating among cluttered, unfamiliar obstacles. Deep reinforcement learning (DRL) can provide adaptive behaviors for autonomous agents through interactive learning, but standard episodic DRL algorithms often struggle with out-of-distribution observations. In navigation tasks, UGVs frequently encounter situations where novel obstacles differ from prior experience. To address this problem, the proposed FGRL approach lets multiple agents learn individual navigation models in diverse scenarios and aggregates their knowledge online into an adaptive, resilient shared model that copes with unfamiliar, uncertain obstacles. Specifically, during learning we define a growth rate for each agent's local model based on its performance over consecutive learning rounds. We then weight each agent's local model by its growth rate when aggregating knowledge into the shared model, and we apply a growth threshold to eliminate the interference of low-quality local models. Extensive simulations validate the proposed solution: the results show that our approach learns resilient collision-avoidance behaviors that enable UGVs to cope with previously unencountered, cluttered obstacles.
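To make the aggregation rule concrete, the sketch below illustrates one plausible reading of it in Python. It is a minimal illustration, not the paper's exact formulation: the growth-rate definition (relative improvement in mean episodic return between consecutive rounds), the threshold `tau`, and all parameter and function names are assumptions introduced here for exposition.

```python
import numpy as np

def aggregate(local_models, returns_prev, returns_curr, tau=0.0, eps=1e-8):
    """Growth-rate-weighted aggregation of local models into a shared model.

    local_models  : list of dicts {param_name: np.ndarray}, one per agent
    returns_prev,
    returns_curr  : mean episodic return of each agent in two consecutive
                    learning rounds
    tau           : growth threshold; local models at or below it are excluded
    """
    # Growth rate per agent: relative performance improvement across
    # consecutive rounds (one plausible definition, assumed here).
    growth = np.array([(c - p) / (abs(p) + eps)
                       for p, c in zip(returns_prev, returns_curr)])

    # Exclude low-quality local models; fall back to uniform averaging
    # if no agent clears the threshold in this round.
    mask = growth > tau
    if not mask.any():
        mask[:], growth[:] = True, 1.0

    # Normalized aggregation weights over the surviving agents.
    weights = np.where(mask, np.clip(growth, eps, None), 0.0)
    weights /= weights.sum()

    # Shared model = growth-weighted average of the local model parameters.
    return {name: sum(w * m[name] for w, m in zip(weights, local_models))
            for name in local_models[0]}
```

In a full FGRL loop, the returned shared model would presumably be broadcast back to the agents as the starting point for their next local learning round.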