Abstract

Most existing simultaneous localization and mapping (SLAM) methods are based on the assumption of a static environment. The presence of moving objects in the scene introduces considerable uncertainty into SLAM results and also hinders loop-closure detection (LCD). Although moving object tracking (MOT) is necessary for planning and decision making, it is often performed separately. To jointly solve SLAM and MOT in complex urban driving scenarios, this paper presents a high-performance method named 4D-SLAM based on the fusion of LiDAR and IMU. The integration of SLAM and MOT is formulated as a joint posterior probability problem over a dynamic Bayesian network (DBN) and is implemented in four sequential stages: preprocessing, moving object detection and tracking, odometry estimation, and mapping. In the preprocessing stage, the motion distortion caused by LiDAR scanning is compensated and the initial LiDAR motion is estimated. In the moving object detection and tracking stage, a CNN-based segmentation network first detects potential moving objects, and their states are then refined by an unscented Kalman filter (UKF). In the odometry estimation stage, distinctive planar and edge features extracted from the static background point cloud are used for odometry estimation, and a two-step Levenberg-Marquardt optimization solves the 6-DOF pose between consecutive scans. In the mapping stage, the map is built from the estimated poses, loop closures are detected, and graph-based global optimization further improves map consistency in large-scale environments. Comprehensive experiments on the open-source KITTI dataset and on data collected by us show that the presented method not only outperforms state-of-the-art SLAM methods in trajectory and mapping accuracy but also detects and tracks moving objects efficiently.
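
The abstract states that a UKF refines the states of the CNN-detected potential movers but does not give the filter's state or measurement model. As a reading aid, here is a minimal sketch assuming a constant-velocity planar state [x, y, vx, vy], object-centroid measurements, and the third-party filterpy library; none of these modeling choices are from the paper.

```python
# Hedged sketch: constant-velocity UKF for one tracked object.
# Assumptions (not from the paper): state = [x, y, vx, vy], the
# measurement is the segmented object's 2-D centroid, and the
# third-party `filterpy` library supplies the UKF machinery.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    """Constant-velocity motion model: position advances by velocity."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    return F @ x

def hx(x):
    """Measurement model: we only observe the object's centroid."""
    return x[:2]

dt = 0.1  # assumed LiDAR scan period (10 Hz)
points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx,
                            points=points)
ukf.x = np.array([0.0, 0.0, 1.0, 0.0])  # initial state guess
ukf.R = np.diag([0.05, 0.05])           # centroid measurement noise
ukf.Q = np.eye(4) * 0.01                # process noise

for z in [np.array([0.11, 0.00]), np.array([0.19, 0.01])]:  # toy centroids
    ukf.predict()
    ukf.update(z)
    print(ukf.x)  # filtered position and velocity estimate
```

In a real pipeline this predict/update cycle would run once per LiDAR scan for every confirmed track.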

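The abstract also says the 6-DOF pose between consecutive scans is solved by a two-step Levenberg-Marquardt optimization over planar and edge features, without stating how the six degrees of freedom are split between the steps. The sketch below assumes a LeGO-LOAM-style split, with planar features constraining [roll, pitch, t_z] and edge features constraining [yaw, t_x, t_y]; the correspondences are toy placeholders and SciPy's LM solver stands in for the authors' implementation.

```python
# Hedged sketch of a two-step Levenberg-Marquardt pose solve.
# Assumption (borrowed from LeGO-LOAM, not stated in the abstract):
# planar features fix [roll, pitch, t_z], edge features fix
# [yaw, t_x, t_y]. All feature pairs below are toy placeholders.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def transform(points, rpy, t):
    """Apply a roll/pitch/yaw rotation and a translation to Nx3 points."""
    R = Rotation.from_euler("xyz", rpy).as_matrix()
    return points @ R.T + t

def plane_residuals(params, pts, plane_n, plane_d):
    """Signed point-to-plane distances; params = [roll, pitch, t_z]."""
    roll, pitch, tz = params
    moved = transform(pts, [roll, pitch, 0.0], np.array([0.0, 0.0, tz]))
    return moved @ plane_n + plane_d

def edge_residuals(params, pts, line_p, line_dir):
    """Point-to-line cross-product residuals; params = [yaw, t_x, t_y]."""
    yaw, tx, ty = params
    moved = transform(pts, [0.0, 0.0, yaw], np.array([tx, ty, 0.0]))
    return np.cross(moved - line_p, line_dir).ravel()

# Toy correspondences standing in for extracted features.
planar_pts = np.array([[1.0, 0.0, 0.30], [0.0, 1.0, 0.28], [1.0, 1.0, 0.31]])
plane_n, plane_d = np.array([0.0, 0.0, 1.0]), 0.0        # ground plane z = 0
edge_pts = np.array([[2.0, 0.10, 1.0], [2.0, 0.12, 2.0]])
line_p, line_dir = np.array([2.05, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])

# Step 1: planar features -> [roll, pitch, t_z] with LM.
s1 = least_squares(plane_residuals, x0=np.zeros(3), method="lm",
                   args=(planar_pts, plane_n, plane_d))
# Step 2: edge features -> [yaw, t_x, t_y], holding step-1 values fixed.
s2 = least_squares(edge_residuals, x0=np.zeros(3), method="lm",
                   args=(edge_pts, line_p, line_dir))
print("roll, pitch, t_z:", s1.x, "| yaw, t_x, t_y:", s2.x)
```
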
Highlights

  • To jointly solve the SLAM and moving object tracking (MOT) problems by fusing 3-D LiDAR and IMU data, we present a 4D-SLAM approach that improves the accuracy of static SLAM and the efficiency of tracking in challenging dynamic environments

  • We unify SLAM and MOT in a common framework that aims to improve localization accuracy and mapping consistency while simultaneously tracking moving objects, a key part of scene understanding for self-driving; a minimal pipeline sketch follows this list
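
To make the unified framework concrete, the skeleton below restates the abstract's four sequential stages in plain Python. Every function here is a stubbed, hypothetical placeholder (so the file runs end to end), not the authors' published interface.

```python
# Hedged skeleton of the four sequential stages named in the abstract.
# All names are hypothetical placeholders stubbed with trivial bodies.
import numpy as np

def preprocess(scan, imu_buffer):
    """Stage 1: compensate scan motion distortion, estimate initial motion."""
    return scan, np.zeros(6)  # stub: undistorted cloud, 6-DOF motion prior

def detect_and_track(cloud, tracker_state):
    """Stage 2: CNN segmentation of potential movers + UKF state refinement."""
    movers = cloud[:0]         # stub: no movers found
    static_background = cloud  # stub: everything treated as static
    return movers, static_background, tracker_state

def odometry(static_cloud, motion_prior):
    """Stage 3: planar/edge features + two-step LM solve for the 6-DOF pose."""
    return motion_prior        # stub: fall back to the motion prior

def mapping(pose, static_cloud, pose_graph):
    """Stage 4: map registration, loop-closure detection, graph optimization."""
    pose_graph.append(pose)    # stub: record the pose only
    return pose_graph

# Toy drive through the pipeline with a fake 100-point scan.
scan, tracker_state, pose_graph = np.random.rand(100, 3), None, []
cloud, prior = preprocess(scan, imu_buffer=[])
movers, static_bg, tracker_state = detect_and_track(cloud, tracker_state)
pose = odometry(static_bg, prior)
pose_graph = mapping(pose, static_bg, pose_graph)
print(len(pose_graph), "poses in graph;", len(movers), "tracked movers")
```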

Introduction

Simultaneous localization and mapping (SLAM) is a fundamental prerequisite for an autonomous mobile robot to navigate an unknown indoor or outdoor environment [1] and is widely used in virtual reality, autonomous driving, etc. Great efforts have been devoted to achieving real-time SLAM with vision-based [2][3] or LiDAR-based [4] methods. Vision-based methods have advantages in LCD and low cost, but their sensitivity to illumination and viewpoint changes limits their application. LiDAR-based methods work even at night, and their high accuracy and reliability make LiDAR a primary perception sensor for self-driving vehicles. Most of the proposed methods rest on the implicit assumption that the surrounding environment is stationary, an assumption that often conflicts with real applications. Taking a self-driving vehicle as an example, when it runs on an urban street it is surrounded by moving objects such as vehicles and pedestrians.
