Abstract

We introduce a software framework for real-time multi-robot collaborative SLAM. Rather than building a complete SLAM system, our framework is designed to enable collaborative mapping for existing (single-robot) SLAM systems in a convenient fashion. The framework aggregates the local pose graphs obtained from multiple robots into a global pose graph, which it then feeds back to the robots to increase their mapping and localization effectiveness. The framework can potentially work with various SLAM algorithms, as long as they provide a pose graph with an image associated with each node and operate at absolute scale. The merging of pose graphs is purely vision-based and requires neither well-defined initial robot positions nor environment markers. To handle network delays, we propose a graph correction scheme that avoids using mutexes (and thus avoids modifying the existing SLAM system) by assuming local graph consistency. Furthermore, we propose a simple image feature filtering method that uses an associated depth image to discard image features unsuitable for scene recognition. We demonstrate the framework's functionality with several indoor datasets that we collected using three robots.
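The depth-based feature filtering is described only at a high level in the abstract. The sketch below illustrates one plausible form of such a filter: keypoints whose depth reading is missing or outside the sensor's reliable range are dropped before scene recognition. The keypoint format, the depth-validity convention, and the range thresholds are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def filter_features_by_depth(keypoints, depth_image,
                             min_depth=0.3, max_depth=8.0):
    """Keep keypoints whose depth reading is valid and within range.

    keypoints   : iterable of (u, v) pixel coordinates (assumed format)
    depth_image : HxW array of depths in metres; 0/NaN marks invalid pixels
    min_depth, max_depth : assumed reliable sensor range in metres
    """
    kept = []
    for u, v in keypoints:
        d = depth_image[int(v), int(u)]
        # Discard features with missing depth (e.g. reflective or distant
        # surfaces) or depth outside the assumed reliable range; such
        # features tend to be unstable for place/scene recognition.
        if np.isfinite(d) and d > 0 and min_depth <= d <= max_depth:
            kept.append((u, v))
    return kept
```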
