Abstract

Dynamic object detection, state estimation, and map-building are crucial for autonomous robot systems and intelligent transportation applications in urban scenarios. Most current LiDAR Simultaneous Localization and Mapping (SLAM) systems assume that the observed environment is static; dynamic objects can therefore compromise a SLAM system's overall accuracy and robustness. To address the inaccurate odometry and erroneous maps produced by existing LiDAR SLAM methods that cannot detect dynamic objects, we study the SLAM problem for robots and unmanned vehicles equipped with LiDAR traveling in dynamic urban scenes. We propose a fast, LiDAR-only, model-free dynamic object detection method that exploits both the spatial and temporal information of the point cloud through a convolutional neural network (CNN); detection accuracy is improved by 35% to 86% compared with methods that use only spatial information. We further integrate it into a state-of-the-art LiDAR SLAM framework to improve SLAM performance. First, a range image constructed from the LiDAR point cloud is used for ground extraction and non-ground point clustering. Then, the motion of objects in the scene is estimated from the difference between adjacent frames, and the segmented objects are divided into dynamic and static objects according to their motion features. Next, stable feature points are extracted from the static objects. Finally, the pose transformation between adjacent frames is solved by matching feature point pairs. We evaluated the accuracy and robustness of our system on datasets with different challenging dynamic environments; the results show significant improvements in the accuracy and robustness of odometry and mapping while still maintaining real-time performance, which is sufficient for autonomous robot systems and intelligent transportation applications in urban scenarios.
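As a concrete illustration of the first step, the sketch below shows one common way to build a range image from a LiDAR sweep by spherical projection. The function name and sensor parameters (64 beams, a 3° to -25° vertical field of view) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): spherical projection of a
# LiDAR point cloud into a range image, the representation the paper uses
# for ground extraction and non-ground point clustering.
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) point cloud into an (h, w) range image."""
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    fov = fov_up - fov_down

    r = np.linalg.norm(points, axis=1)                      # range per point
    yaw = np.arctan2(points[:, 1], points[:, 0])            # azimuth
    pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-8))   # elevation

    # Map angles to pixel coordinates (rows = beams, columns = azimuth).
    u = (0.5 * (1.0 - yaw / np.pi) * w).astype(int) % w
    v = np.clip(((1.0 - (pitch - fov_down) / fov) * h).astype(int), 0, h - 1)

    image = np.full((h, w), -1.0)   # -1 marks pixels with no return
    image[v, u] = r
    return image
```

On such an image, a common approach (not necessarily the authors' exact one) identifies ground pixels from the vertical angle between neighboring rows and clusters the remaining pixels by range continuity, matching the ground extraction and clustering roles described in the abstract.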

Highlights

  • Dynamic object detection, state estimation, and map-building are crucial for autonomous robot systems and intelligent transportation applications in urban scenarios

  • We propose a new LiDAR-only odometry and mapping method suitable for dynamic environments; it realizes robust pose estimation and builds a static map that filters out dynamic objects, promoting the development of dynamic object detection and LiDAR Simultaneous Localization and Mapping (SLAM)

  • The results show that the accuracy of dynamic object detection is improved by 35% to 86% compared with methods that only use spatial information, and the accuracy and robustness of SLAM in these scenarios are significantly improved

Summary

INTRODUCTION

Simultaneous Localization and Mapping (SLAM) is a basic prerequisite for intelligent robots and a necessary capability for driverless vehicles. We propose a new LiDAR-only odometry and mapping method suitable for dynamic environments; it realizes robust pose estimation and builds a static map that filters out dynamic objects, promoting the development of dynamic object detection and LiDAR SLAM. We compare the proposed approach with several other state-of-the-art approaches and systems that have shown excellent results in dynamic urban environments. The experimental results show that, in dynamic scenarios, the accuracy of dynamic object detection is greatly improved compared with methods that use only spatial information, and odometry and mapping accuracy are significantly improved. By embedding the proposed dynamic object detection module into a state-of-the-art LiDAR SLAM framework, the accuracy of odometry and mapping in dynamic environments is greatly improved compared with methods that operate under the static-world assumption.
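Where the abstract says the pose transformation between adjacent frames is solved by matching feature point pairs, the sketch below shows the standard closed-form (Kabsch/SVD) solution for recovering a rigid transform from matched 3D pairs. It is a minimal illustration of that final step under the assumption of known one-to-one correspondences; LOAM-style pipelines typically minimize point-to-edge and point-to-plane residuals iteratively instead, so this is not presented as the paper's exact solver.

```python
# Minimal sketch (assumed): closed-form rigid alignment of matched
# feature point pairs, R @ src_i + t ≈ dst_i.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t for (N, 3) matched pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```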

RELATED WORK
LiDAR Odometry and Mapping
LiDAR Dynamic Object Detection
LiDAR SLAM in Dynamic Environments
PROPOSED SYSTEM
Description and Definitions
System Overview
Segmentation
Pre-Registration and Dynamic Object Detection
Feature Extraction
LiDAR Odometry
LiDAR Mapping
EXPERIMENTS
Method
Feature Extraction Comparison
Odometry Comparison
Mapping Comparison
Runtime Comparison
Findings
CONCLUSION
