Abstract

Mobile robots have been widely used in hazardous environments to obtain information about the surroundings for humans. The volume and efficiency of sensing data can be significantly increased if collaboration among multiple mobile robots is exploited. In recent years, deep learning and reinforcement learning techniques have been applied to the field of robotics and perform well in many tasks, including exploration of unknown environments. In this paper, to address the multi-robot exploration problem, a multi-agent deep reinforcement learning (MADRL) based method with the centralized training and decentralized execution (CTDE) architecture is proposed. Extensive experimental results show that our method significantly improves multi-robot exploration performance in unknown environments. On average, the proposed method reduces travel distance by 12.9% and overlapping areas by 5.8% compared with traditional methods.
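The CTDE pattern named in the abstract can be illustrated with a minimal sketch: during training, a centralized critic may observe all robots' observations and actions, while at execution time each robot's actor selects an action from its local observation alone. All class and function names here are illustrative placeholders, not the paper's implementation; the actor and critic would be learned neural networks in practice.

```python
import random


class Actor:
    """Decentralized policy: each robot acts on its local observation only."""

    def __init__(self, n_actions, seed=0):
        self.n_actions = n_actions
        self.rng = random.Random(seed)

    def act(self, local_obs):
        # Placeholder policy: uniform random action.
        # In MADRL this would be a trained policy network.
        return self.rng.randrange(self.n_actions)


class CentralCritic:
    """Centralized value estimate: sees every robot's observation and action,
    but only during training."""

    def value(self, joint_obs, joint_actions):
        # Placeholder joint-value score; a real critic is a learned network
        # conditioned on the global state.
        return -float(sum(joint_actions))


def training_step(actors, critic, joint_obs):
    # Centralized training: the critic evaluates the joint behavior of all robots.
    joint_actions = [a.act(o) for a, o in zip(actors, joint_obs)]
    return critic.value(joint_obs, joint_actions), joint_actions


def execute(actors, joint_obs):
    # Decentralized execution: each robot uses only its own observation;
    # the critic is not consulted at deployment time.
    return [a.act(o) for a, o in zip(actors, joint_obs)]
```

At deployment, only `execute` runs on the robots, which is what makes the policies usable without global communication.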
