Abstract

This article proposes an active relative localization mechanism for multi-agent simultaneous localization and mapping (SLAM), in which an agent to be observed is treated as a task, and the agents willing to assist it perform that task through relative observation. A task allocation algorithm based on deep reinforcement learning is proposed for this mechanism. Each agent decides on its own initiative whether to localize other agents or to continue its independent SLAM; in this way, each agent's SLAM process is coupled with the collaboration. First, an observation function that models the whole multi-agent system is obtained based on ORB-SLAM. Second, a novel variant of the Deep Q Network, the multi-agent system Deep Q Network (MAS-DQN), is deployed to learn the correspondence between Q values and state-action pairs; an abstract representation of the agents in the multi-agent system is learned during their collaboration. Finally, each agent acts, with a certain degree of freedom, according to MAS-DQN. Simulation results from comparative experiments show that this mechanism improves the efficiency of cooperation in multi-agent SLAM.
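The abstract does not include implementation details, so the following PyTorch sketch is only a rough illustration of what a DQN-style task-allocation step could look like: one Q-value per candidate action, where action 0 means "continue independent SLAM" and action j means "actively observe teammate j". The class name `MASDQN`, the 16-dimensional state vector, the network sizes, and the epsilon-greedy policy are all illustrative assumptions, not the authors' MAS-DQN.

```python
# Minimal sketch (not the authors' implementation): a DQN-style Q-network that
# scores, for one agent, the action "continue own SLAM" plus one "observe
# teammate j" action per other agent. Sizes and state layout are assumptions.
import torch
import torch.nn as nn

class MASDQN(nn.Module):
    def __init__(self, state_dim: int, n_agents: int, hidden: int = 128):
        super().__init__()
        # Output index 0 = keep doing independent SLAM,
        # indices 1..n_agents-1 = actively observe (localize) teammate j.
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_agents),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # one Q-value per candidate action

def choose_action(qnet: MASDQN, state: torch.Tensor, eps: float = 0.1) -> int:
    """Epsilon-greedy selection over {own SLAM, observe teammate j}."""
    n_actions = qnet.net[-1].out_features
    if torch.rand(1).item() < eps:
        return int(torch.randint(n_actions, (1,)).item())
    with torch.no_grad():
        return int(qnet(state).argmax().item())

# Hypothetical usage: 4 agents, each summarized by a 16-dim feature vector
# (e.g. pose-uncertainty and map-quality statistics from the SLAM front end).
qnet = MASDQN(state_dim=16, n_agents=4)
action = choose_action(qnet, torch.randn(16))
print("chosen action index:", action)
```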

Highlights

  • A critical task for a mobile robot is to determine its pose in a given environment map, which is the basis of other types of tasks

  • When mobile robots enter an unknown part of the environment, they need to build a map (a 3-D point cloud map, a topological map, or a 2-D map) from their own sensors to carry out their tasks, while at the same time determining their own localization within that map; this is the simultaneous localization and mapping (SLAM) problem

  • To build accurate maps, the robot needs an accurate estimate of its pose; conversely, accurate localization requires an established high-quality map. This mutual dependence is the core difficulty of SLAM (a toy example of the coupling follows this list)
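As a toy illustration of this coupling (not taken from the paper), the following 1-D least-squares example estimates two robot positions and one landmark position jointly from odometry and range measurements; all numbers are invented.

```python
# Toy 1-D illustration of why pose and map must be estimated jointly:
# two robot positions x1, x2 and one landmark m are solved together
# from noisy odometry and range measurements via linear least squares.
import numpy as np

# Unknowns: [x1, x2, m]; the first pose x0 = 0 anchors the trajectory.
# Measurements: odometry x1 - 0 = 1.0, x2 - x1 = 1.1,
# landmark ranges m - x1 = 2.05, m - x2 = 0.95.
A = np.array([
    [ 1.0, 0.0, 0.0],   # x1        = 1.00
    [-1.0, 1.0, 0.0],   # x2 - x1   = 1.10
    [-1.0, 0.0, 1.0],   # m  - x1   = 2.05
    [ 0.0,-1.0, 1.0],   # m  - x2   = 0.95
])
z = np.array([1.0, 1.1, 2.05, 0.95])

x1, x2, m = np.linalg.lstsq(A, z, rcond=None)[0]
print(f"x1={x1:.3f}, x2={x2:.3f}, m={m:.3f}")
# The landmark estimate depends on the pose estimates and vice versa,
# which is exactly the coupling that makes SLAM hard.
```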



Introduction

A critical task for a mobile robot is to determine its pose (position and orientation) in a given environment map, which is the basis of other types of tasks. The difficulty of online multi-agent SLAM is how to cooperate in the best way when each agent has different attributes and different states. This issue is prominent in relative localization between robots. The localization ability of a wheeled robot is stronger than that of an unmanned aerial vehicle (UAV) because of its odometer, but the UAV has a richer range of poses, which gives it stronger mapping ability. If both kinds of robots are present in the system at the same time, an appropriate collaboration model can make full use of the advantages of both [9]. This type of relative localization is controlled manually. The complete algorithm of the approach is shown
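As a rough, hypothetical sketch of a single relative observation between a well-localized wheeled robot and a drifting UAV (not the paper's algorithm), the example below composes the observer's pose with a measured relative transform and fuses the result with the UAV's own estimate. The SE(2) pose representation, the covariance values, and the fusion rule are all assumptions made for illustration.

```python
# Illustrative sketch: a wheeled robot with a confident pose estimate measures
# the UAV's pose relative to itself, and the UAV fuses that observation with
# its own drifted estimate. All numbers are invented.
import numpy as np

def compose(pose, delta):
    """SE(2) composition: apply relative pose `delta` in the frame of `pose`."""
    x, y, th = pose
    dx, dy, dth = delta
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * dx - s * dy, y + s * dx + c * dy, th + dth])

def fuse(mean_a, cov_a, mean_b, cov_b):
    """Covariance-weighted fusion of two independent Gaussian pose estimates."""
    K = cov_a @ np.linalg.inv(cov_a + cov_b)      # gain toward the second estimate
    return mean_a + K @ (mean_b - mean_a), (np.eye(3) - K) @ cov_a

wheeled_pose  = np.array([2.0, 1.0, 0.30])        # well localized via odometry
relative_meas = np.array([1.5, 0.2, -0.10])       # UAV as seen from the wheeled robot
uav_from_obs  = compose(wheeled_pose, relative_meas)

uav_own = np.array([3.6, 1.8, 0.15])              # UAV's drifted self-estimate
cov_own = np.diag([0.50, 0.50, 0.10])             # large: visual-only drift
cov_obs = np.diag([0.05, 0.05, 0.02])             # small: observer is confident

uav_fused, cov_fused = fuse(uav_own, cov_own, uav_from_obs, cov_obs)
print("UAV pose after relative observation:", np.round(uav_fused, 3))
```

The fused estimate is pulled strongly toward the relative observation because the wheeled robot's covariance is much smaller, which is the benefit the collaboration model is meant to exploit.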
