Abstract

As mobile robots become increasingly widely used, designing efficient path planning methods for multi-robot systems (MRS) that can adapt to complex, unknown environments is of great significance. In this paper, we present a deep reinforcement learning (DRL) based method for navigating an MRS with collision avoidance in unknown dynamic environments, following a centralized-learning, decentralized-execution paradigm. The proposed policy maps raw laser measurements directly to robot control commands without constructing global maps. The learned policy is evaluated in Gazebo environments with a three-robot system and shows effective performance in terms of success rate, extra time rate, and formation maintenance rate.
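The core idea, mapping raw laser readings directly to control commands without a global map, can be sketched as a small feed-forward policy. This is a minimal illustration, not the paper's architecture: the beam count, layer sizes, random (untrained) weights, and velocity limits are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 180-beam laser scan mapped to
# (linear velocity, angular velocity). Sizes are illustrative,
# not taken from the paper.
N_BEAMS, HIDDEN, N_ACTIONS = 180, 64, 2

# Randomly initialized weights stand in for a trained DRL policy.
W1 = rng.normal(0.0, 0.1, (HIDDEN, N_BEAMS))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_ACTIONS, HIDDEN))
b2 = np.zeros(N_ACTIONS)

def policy(scan):
    """Map a raw laser scan directly to control commands (no map built)."""
    h = np.tanh(W1 @ scan + b1)        # hidden features from raw ranges
    raw_v, raw_w = np.tanh(W2 @ h + b2)  # bounded outputs in [-1, 1]
    v = 0.5 * (raw_v + 1.0)            # linear velocity in [0, 1] m/s (assumed limit)
    w = raw_w                          # angular velocity in [-1, 1] rad/s (assumed limit)
    return v, w

scan = rng.uniform(0.2, 10.0, N_BEAMS)  # simulated range readings in metres
v, w = policy(scan)
```

In a decentralized-execution setting, each robot would run such a policy on its own laser scan at runtime, while training is done centrally with shared experience.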
