Abstract

In this article, we propose a distributed and collaborative monocular simultaneous localization and mapping (SLAM) system for multi-robot systems in large-scale environments, where monocular vision is the only exteroceptive sensor. Each robot estimates its pose and reconstructs the environment simultaneously using the same monocular SLAM algorithm. Meanwhile, the robots share their incremental maps by streaming keyframes through Robot Operating System (ROS) messages over a wireless network, so that each robot in the group can obtain the global map with high efficiency. To build the collaborative SLAM architecture, two novel approaches are proposed: a robust relocalization method based on active loop closure, and a vision-based method for multi-robot relative pose estimation and map merging. The former solves the problem of tracking failures when robots carry out long-term monocular SLAM in large-scale environments, while the latter uses appearance-based place recognition to determine multi-robot relative poses and builds the large-scale global map by merging each robot's local map. Both the KITTI data set and our own data set, acquired with a handheld camera, are used to evaluate the proposed system. Experimental results show that the proposed distributed multi-robot collaborative monocular SLAM system can be used in both small-scale indoor and large-scale outdoor environments.

Highlights

  • Because the monocular camera is cheaper, physically smaller, and lower powered than other vision systems, such as stereo and RGB-D cameras, it has been widely applied in computer vision and robotics

  • Since the proposed active loop closure module tries to detect loops in every incoming frame after relocalization, it is vital to have a good keyframe selection policy, one that neither ignores any potential match nor is so permissive that it raises the computational burden. The policy checks four conditions: (1) more than 20 frames have passed since the latest relocalization; (2) at least 50 points are tracked in the current frame; (3) the current frame tracks less than 90% of the map points tracked by the last keyframe; and (4) at least 10 frames have passed since the last keyframe insertion and local mapping has processed the last keyframe, otherwise a signal is sent to local mapping to finish local bundle adjustment

  • We propose a robust monocular simultaneous localization and mapping (SLAM) system based on image-to-map relocalization and active loop closure to solve the well-known tracking failure problem in monocular SLAM
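The four-condition keyframe selection policy described above can be sketched as a single predicate. This is an illustrative sketch, not the paper's implementation; all names and the `local_mapping_idle` flag are assumptions drawn from the description of condition (4).

```python
# Thresholds from the four conditions described above.
MIN_FRAMES_AFTER_RELOC = 20   # condition (1)
MIN_TRACKED_POINTS = 50       # condition (2)
MAX_TRACKED_RATIO = 0.90      # condition (3)
MIN_FRAMES_AFTER_KF = 10      # condition (4)

def need_new_keyframe(frames_since_reloc,
                      tracked_points,
                      tracked_ratio_vs_last_kf,
                      frames_since_last_kf,
                      local_mapping_idle):
    """Return True only when all four keyframe-insertion conditions hold."""
    if frames_since_reloc <= MIN_FRAMES_AFTER_RELOC:
        return False   # (1) too soon after the latest relocalization
    if tracked_points < MIN_TRACKED_POINTS:
        return False   # (2) tracking is too weak in the current frame
    if tracked_ratio_vs_last_kf >= MAX_TRACKED_RATIO:
        return False   # (3) view is still too similar to the last keyframe
    if frames_since_last_kf < MIN_FRAMES_AFTER_KF or not local_mapping_idle:
        return False   # (4) keyframe rate limit / local mapping still busy
    return True
```

Framing the policy as a pure function of a few counters makes it cheap to evaluate on every incoming frame, which matters because the active loop closure module runs loop detection continuously after relocalization.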

Summary

Introduction

Because the monocular camera is cheaper, physically smaller, and lower powered than other vision systems, such as stereo and RGB-D cameras, it has been widely applied in computer vision and robotics. We propose an active loop closure approach that uses the robot/camera pose obtained from relocalization to navigate the robot to find a loop actively and, in turn, eliminate the accumulated drift. Another problem for robots carrying out large-scale SLAM is that the computing capacity of a single robot is normally limited. A robust relocalization system based on active loop closure is proposed, in which the pose information obtained from the latest relocalization is used to navigate the robot to find a loop actively and thereby eliminate the drift caused by tracking failure. A relative pose calculation and map merging method is also proposed, by which multi-robot collaborative SLAM can be realized without any prior knowledge or large map overlaps.
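Once place recognition yields one robot's pose in another robot's frame, map merging amounts to applying that relative transform to every landmark in the local map. The sketch below is a minimal illustration (not the paper's implementation) assuming the relative pose is expressed as a similarity transform Sim(3) with scale `s`, rotation `R`, and translation `t`, which is the usual choice in monocular SLAM because the absolute scale is unobservable.

```python
import numpy as np

def merge_map(points_b, s, R, t):
    """Transform 3xN landmarks from robot B's frame into robot A's frame:
    p_a = s * R @ p_b + t (a Sim(3) similarity transform)."""
    return s * (R @ points_b) + t.reshape(3, 1)

# Example: B's frame is rotated 90 degrees about z and shifted relative to A's.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 0.0])
points_b = np.array([[1.0], [0.0], [0.0]])   # one landmark in B's frame
points_a = merge_map(points_b, 1.0, R, t)    # lands at (1.0, 3.0, 0.0) in A's frame
```

In practice each robot would apply such a transform to the keyframe poses and map points streamed from its teammates, after which the merged map can be refined jointly, e.g. by a global bundle adjustment.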

Related work
Experiments
Findings
Conclusion
