Abstract

The use of cooperative multirobot teams in urban search and rescue (USAR) environments is a challenging yet promising research area. For multirobot teams working in USAR missions, the objective is to have the rescue robots work together effectively, coordinating task allocation and task execution among team members to minimize the overall exploration time needed to search disaster scenes and to find as many victims as possible. This paper presents the development of a multirobot cooperative learning approach for a hierarchical reinforcement learning (HRL) based semiautonomous control architecture, enabling a robot team to learn cooperatively to explore and identify victims in cluttered USAR scenes. The proposed cooperative learning approach allows effective task allocation among the multirobot team and efficient execution of the allocated tasks, improving overall team performance. Human intervention is requested by the robots when they determine that they cannot effectively execute an allocated task autonomously. Thus, the robot team is able to make cooperative decisions regarding task allocation between different team members (robots and human operators) and to share experiences on execution of the allocated tasks. Extensive results verify the effectiveness of the proposed HRL-based methodology for multirobot cooperative exploration and victim identification in USAR-like scenes.
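The abstract describes two mechanisms: learned task allocation among robots and a fallback to human teleoperation when autonomous execution is judged infeasible. The following is a minimal sketch of how these mechanisms could be combined, not the authors' implementation; the Q-learning allocator, the confidence score, and the 0.5 handoff threshold are all illustrative assumptions.

```python
import random

class TaskAllocator:
    """Illustrative high-level learner: Q-values over (robot, task) pairs
    guide which exploration/identification task each robot takes on."""

    def __init__(self, robots, tasks, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = {}          # Q[(robot, task)] -> estimated value
        self.robots = robots
        self.tasks = tasks
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def allocate(self, robot):
        # Epsilon-greedy choice over available tasks for this robot.
        if random.random() < self.epsilon:
            return random.choice(self.tasks)
        return max(self.tasks, key=lambda t: self.q.get((robot, t), 0.0))

    def update(self, robot, task, reward):
        # Standard one-step Q-learning update on the allocation decision.
        best_next = max(self.q.get((robot, t), 0.0) for t in self.tasks)
        old = self.q.get((robot, task), 0.0)
        self.q[(robot, task)] = old + self.alpha * (
            reward + self.gamma * best_next - old
        )


def execute(robot, task, confidence):
    """Semiautonomous execution: if the robot's (hypothetical) confidence in
    completing the task autonomously is too low, request the human operator."""
    if confidence < 0.5:  # illustrative threshold
        return f"{robot}: requesting human teleoperation for {task}"
    return f"{robot}: executing {task} autonomously"


allocator = TaskAllocator(["robot1", "robot2"],
                          ["explore_region_A", "identify_victim_B"])
task = allocator.allocate("robot1")
allocator.update("robot1", task, reward=1.0)
print(execute("robot1", task, confidence=0.3))
```

Here the allocation policy improves as rewards (e.g., area covered or victims found) are fed back, while the execution layer keeps the human operator in the loop only when needed, matching the semiautonomous design the abstract outlines.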
