Abstract

Simultaneous localization and mapping (SLAM) is a fundamental process in robot navigation. We address this process for ground robots traveling on complex terrain with a LiDAR-centric approach, proposing GR-LOAM, a method that estimates robot ego-motion by fusing LiDAR, inertial measurement unit (IMU), and encoder measurements in a tightly coupled scheme. First, we derive an odometer increment model that fuses the IMU and encoder measurements to estimate the variation of the robot pose on a manifold. Then, we apply point cloud segmentation and feature extraction to obtain distinctive edge and planar features. Moreover, we propose an evaluation algorithm that detects abnormal sensor measurements and reduces their weight during optimization. By jointly optimizing the costs derived from the LiDAR, IMU, and encoder measurements in a local window, we obtain low-drift odometry even on complex terrain. We use the relative poses estimated in the local window to reevaluate the matching distances between features and to remove dynamic objects and outliers, thus refining the features before they are fed to the mapping thread and increasing mapping efficiency. In the back end, GR-LOAM uses the refined point cloud and tightly couples the IMU and encoder measurements with ground constraints to further refine the estimated pose by aligning the features with a global map. Results from extensive experiments performed in indoor and outdoor environments using a real ground robot demonstrate the high accuracy and robustness of the proposed GR-LOAM for state estimation of ground robots.
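
The abstract does not spell out the odometer increment model; as a rough, hypothetical sketch of how such an on-manifold IMU-encoder increment is commonly written (the symbols $R_k$, $p_k$, $\tilde{\omega}_k$, $\tilde{v}_k$, and $b_g$ are our own notation, not taken from the paper):

\[
R_{k+1} = R_k \,\mathrm{Exp}\big((\tilde{\omega}_k - b_g)\,\Delta t\big), \qquad
p_{k+1} = p_k + R_k\,\tilde{v}_k\,\Delta t,
\]

where $R_k \in SO(3)$ and $p_k \in \mathbb{R}^3$ are the rotation and position at time $t_k$, $\tilde{\omega}_k$ is the gyroscope reading corrected by the bias $b_g$, $\tilde{v}_k$ is the body-frame linear velocity obtained from the wheel encoders, and $\mathrm{Exp}$ is the exponential map of $SO(3)$. Composing the rotation on the manifold rather than in a vector parameterization avoids singularities when the robot pitches and rolls on complex terrain, which is the setting the paper targets.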
