Abstract

Accurate registration of sparse sequential point cloud data frames acquired by a 3D light detection and ranging (LiDAR) sensor such as the VLP-16 is a prerequisite for the back-end optimization of general LiDAR SLAM algorithms to achieve a globally consistent map. This process is also called LiDAR odometry. Aiming to achieve lower-drift and more robust LiDAR odometry in less structured outdoor scenes using a low-cost wheeled robot-borne laser scanning system, this paper proposes a segment-based sampling strategy for LiDAR odometry. The proposed method was tested in two typical less structured outdoor scenes and compared with two other state-of-the-art methods. The results reveal that the proposed method achieves lower drift and significantly outperforms the state of the art.

Highlights

  • In the last ten years, 3D light detection and ranging (LiDAR) SLAM (Simultaneous Localization And Mapping) has been an active research topic in Robotics and 3D Vision (Zhang and Singh, 2017)

  • Backpack and handheld mobile mapping systems (MMS), which are generally equipped with laser scanners, cameras, and inertial measurement units (IMU), utilize this technology to derive the position and orientation (pose) of the MMS for 3D mapping (Nüchter et al., 2015; Kaarta Stencil 2, 2018; Leica BLK2GO, 2019; Karam et al., 2019)

  • LiDAR SLAM consists of two modules (Nüchter et al., 2007): 1) the front-end (LiDAR odometry), which uses sequential LiDAR scans to estimate the sensor's pose in real time


Summary

INTRODUCTION

In the last ten years, 3D LiDAR SLAM (Simultaneous Localization And Mapping) has been an active research topic in Robotics and 3D Vision (Zhang and Singh, 2017). LiDAR SLAM consists of two modules (Nüchter et al., 2007): 1) the front-end (LiDAR odometry), which uses sequential LiDAR scans to estimate the sensor's pose in real time. Zhang and Singh (2014) propose a real-time LiDAR odometry and mapping method called LOAM. Deschaud (2018) proposes a scan-to-model matching method with the Implicit Moving Least Squares (IMLS) surface representation. That method uses the previous relative transformation to unwarp the current frame's point cloud, assuming that the ego-motion is relatively similar between two consecutive scans. To further improve the pose estimation, Chen et al. (2019) add semantic constraints derived by a deep neural network to the frame-to-model ICP problem based on SuMa. As the above shows, most methods rely on feature points extracted from the raw point cloud to perform the scan matching. This paper proposes a segment-based sampling strategy to filter the point cloud, ensuring a more stable feature point extraction and thereby achieving lower-drift LiDAR odometry in such scenes.
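The constant-velocity unwarping step mentioned above (as in IMLS-SLAM) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the per-point timestamp convention (a value in [0, 1] giving the fraction of the sweep elapsed at acquisition) are assumptions, and rotation interpolation is done with a simple axis-angle log/exp map.

```python
import numpy as np

def log_so3(R):
    """Rotation matrix -> axis-angle vector (SO(3) log map)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def exp_so3(w):
    """Axis-angle vector -> rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def unwarp_scan(points, timestamps, T_rel):
    """De-skew one LiDAR sweep under a constant-velocity assumption.

    points     : (N, 3) points in the sensor frame at acquisition time
    timestamps : (N,) fraction of the sweep elapsed per point, in [0, 1]
    T_rel      : (4, 4) predicted motion over the full sweep; with the
                 constant-velocity assumption this is simply the previous
                 frame-to-frame relative transform.
    Returns the points expressed in the frame at the end of the sweep.
    """
    w = log_so3(T_rel[:3, :3])
    t = T_rel[:3, 3]
    out = np.empty_like(points)
    for i, (p, s) in enumerate(zip(points, timestamps)):
        frac = 1.0 - s               # fraction of the sweep still to come
        R_i = exp_so3(frac * w)      # interpolated rotation for this point
        out[i] = R_i @ p + frac * t  # interpolated rigid motion applied
    return out
```

A point captured at the start of the sweep (timestamp 0) receives the full predicted motion, while a point captured at the end (timestamp 1) is left unchanged, which removes the skew introduced by the sensor moving during one rotation.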

SEGMENT-BASED LIDAR ODOMETRY
Point Clouds Segmentation
LiDAR Odometry
Experiment data
Comparison with other methods and analysis
Findings
CONCLUSIONS
