Abstract

Localization and mapping are key requirements for autonomous mobile systems to perform navigation and interaction tasks. Iterative Closest Point (ICP) is widely applied for LiDAR scan-matching in the robotics community. However, the standard ICP algorithm considers only geometric information when iteratively searching for the nearest points, and ICP alone cannot achieve accurate point-cloud registration in challenging settings such as dynamic environments and highways. Moreover, searching for the closest points is an expensive step in the ICP algorithm, which limits its ability to meet real-time requirements, especially when dealing with large-scale point-cloud data. In this paper, we propose a segment-based scan-matching framework for six-degree-of-freedom pose estimation and mapping. A LiDAR generates a large number of ground points when scanning, many of which are uninformative and increase the burden of subsequent processing. To address this problem, we first apply an image-based ground-point extraction method to filter out noise and ground points. The point cloud remaining after ground removal is then segmented into disjoint sets. After this step, a standard point-to-point ICP is applied to calculate the six-degree-of-freedom transformation between consecutive scans. Furthermore, once closed loops are detected in the environment, a 6D graph-optimization algorithm for global relaxation (6D simultaneous localization and mapping (SLAM)) is employed. Experiments on the publicly available KITTI datasets show that our method requires less runtime while achieving higher pose-estimation accuracy than the standard ICP method and its variants.
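For readers unfamiliar with the registration step, point-to-point ICP alternates between nearest-neighbor correspondence search and a closed-form rigid alignment of the matched pairs. The following is a minimal illustrative sketch in Python (using NumPy and SciPy's `cKDTree` for the nearest-neighbor query); it is not the implementation evaluated in this paper:

```python
# Illustrative point-to-point ICP sketch; not the paper's implementation.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Closed-form (Kabsch/SVD) rigid alignment of paired 3D point sets."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp_point_to_point(src, dst, max_iters=50, tol=1e-8):
    """Align src (N,3) onto dst (M,3); returns accumulated (R, t)."""
    tree = cKDTree(dst)                       # expensive step: NN search structure
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(max_iters):
        dists, idx = tree.query(cur)          # nearest target point for each source point
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t                   # apply incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:         # stop when residual stabilizes
            break
        prev_err = err
    return R_total, t_total
```

The nearest-neighbor query dominates the runtime, which is why reducing the input size (removing ground points and registering segments, as above) directly lowers the cost per scan pair.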

Highlights

  • Localization and mapping are crucial tasks for autonomous mobile robot navigation in unknown environments

  • Outliers that share no common attributes with their neighbors are removed, and the remaining clusters are merged into a new point cloud

  • We analyze the results of four modules: ground-point removal, segmentation, Iterative Closest Point (ICP), and 6D simultaneous localization and mapping (SLAM)



Introduction

Localization and mapping are crucial tasks for autonomous mobile robot navigation in unknown environments. GPS is one of the most widely used solutions for localization, but it suffers from drawbacks such as multi-path effects and latency, which limit its application in urban areas and indoor environments [1]. Pose estimation based on inertial navigation systems (INS) and visual sensors has been widely studied over recent decades. An INS estimates pose by integrating acceleration and angular velocity, and is therefore subject to unbounded error accumulation due to bias and noise in the inertial sensors [2]. Vision-based methods can obtain robust and accurate motion estimation, but they are vulnerable to ambient lighting conditions [3].

