Abstract

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Great efforts have been devoted to achieving real-time SLAM, based on vision or LiDAR sensors, that can support a robot's state estimation. However, most mature SLAM methods assume that the environment is static; in dynamic environments their performance degrades or they fail outright. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods under different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison of LOAM and LeGO-LOAM. Second, building on this investigation, we propose EMO, a novel approach that eliminates moving objects for SLAM by fusing LiDAR and millimeter-wave (mmW) radar, aiming to improve the accuracy and robustness of state estimation. The method exploits the complementary characteristics of the two sensors to fuse information at two different resolutions. Moving objects are efficiently detected by the radar via the Doppler effect, accurately segmented and localized by the LiDAR, and then filtered out of the point clouds through data association after accurate synchronization in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments on both semi-physical simulations and real-world datasets. The results demonstrate that the method improves SLAM accuracy (reducing absolute position error by at least 30%) and robustness in dynamic environments.
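To make the pipeline concrete, the following is a minimal Python sketch of the core filtering step, assuming extrinsically calibrated and time-synchronized sensors. All names here (filter_moving_points, doppler_threshold, assoc_radius) are illustrative assumptions, not identifiers from the paper: where EMO segments objects in the LiDAR cloud and associates them with radar detections, this sketch reduces the association to a fixed radius around each Doppler-flagged detection.

```python
# Minimal sketch of the moving-object filtering idea described above.
# Function and parameter names are hypothetical, not from the paper.
import numpy as np

def filter_moving_points(lidar_points, radar_detections,
                         doppler_threshold=0.5, assoc_radius=1.5):
    """Remove LiDAR points near radar detections whose Doppler (radial)
    velocity exceeds a threshold, keeping only the static scene.

    lidar_points     : (N, 3) array of x, y, z in a common sensor frame,
                       assumed already time-synchronized with the radar scan
    radar_detections : (M, 4) array of x, y, z, radial_velocity
    """
    # Flag detections whose radial velocity indicates motion.
    moving = radar_detections[np.abs(radar_detections[:, 3]) > doppler_threshold]
    if moving.shape[0] == 0:
        return lidar_points  # nothing dynamic detected in this scan

    keep = np.ones(lidar_points.shape[0], dtype=bool)
    for det in moving[:, :3]:
        # Coarse radar localization, refined by LiDAR's finer resolution:
        # discard every point within the association radius of the detection.
        dist = np.linalg.norm(lidar_points - det, axis=1)
        keep &= dist > assoc_radius
    return lidar_points[keep]
```

In this simplified form, the radar contributes a coarse but reliable motion cue (a radial velocity per detection), while the dense LiDAR cloud supplies the precise geometry from which the dynamic points are cut out; the remaining static points would then be passed to the SLAM front end.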

Highlights

  • In recent decades, autonomous robots, widely used in fields such as urban warfare, post-disaster rescue, autonomous driving, and space robotics, have attracted growing attention

  • We quantitatively evaluate how the shape, quantity, speed, and distribution of moving objects influence the performance of LiDAR-based Simultaneous Localization and Mapping (SLAM), providing a theoretical background for the proposed method of eliminating moving objects for SLAM

  • Simulated and real-world experiments clearly show that, benefiting from the well-designed sensor fusion and data association, the proposed method effectively improves the accuracy and robustness of LiDAR-based SLAM even in highly dynamic environments, without significantly increasing computation



Introduction

Autonomous robots, widely used in various fields such as urban warfare, post-disaster rescue, autonomous driving, and space robotics, have attracted more and more attention. Simultaneous Localization and Mapping (SLAM) is a prerequisite for many robotic applications: a SLAM system simultaneously estimates the pose of the mobile robot and constructs a map of the surrounding environment without any prior information [2]. SLAM has been vigorously pursued in the mobile robotics research field, and various excellent algorithms have emerged. They can be divided into vision-based SLAM and LiDAR-based SLAM. LiDAR-based SLAM is widely used because it acquires accurate and reliable distance information about the surrounding environment for state estimation. LiDAR SLAM can be further divided into filter-based and optimization-based methods, such as Gmapping [7], Hector SLAM [8], LOAM [9], and Cartographer [10].

