Abstract

This article presents a method for mobile robotic systems that reduces the size of 3-D point cloud maps while keeping localization errors small when the reduced maps are used for localization. Localization with dense depth maps, especially in large environments, can be computationally expensive and therefore unaffordable for robotic systems with limited computational resources. Because point cloud data collected with a multichannel 3-D LiDAR contain representational redundancy, a portion of the data can be removed for compactness. However, existing methods either demand annotations and descriptors defined by human experts or lose spatial features that are important for accurate localization. We propose a fully automatic reduction method for reliable navigation of mobile systems with low computing power and limited storage. Given a full 3-D point cloud map, our method builds a graph whose vertices represent robot poses and whose edges represent the similarity between the data collected from a pair of robot poses; each robot pose is associated with a point cloud set. We then find a dominating set of the graph, that is, a subset of robot poses such that the robot can localize reliably using only the point clouds collected from those poses. Experimental results in indoor environments show that our system reduces the amount of map data by up to 93% while keeping the average translation and rotation errors within acceptable ranges of 0.25 m and 0.5°, respectively.
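To illustrate the dominating-set idea described above, the following is a minimal sketch (not the authors' implementation) of a standard greedy dominating-set approximation over a pose-similarity graph. The similarity matrix `sim`, the threshold `tau`, and the function names are hypothetical placeholders for whatever similarity measure and graph construction the full method actually uses.

```python
import numpy as np


def build_pose_graph(sim: np.ndarray, tau: float) -> list[set[int]]:
    """Adjacency list: pose i is adjacent to pose j if their data are similar enough."""
    n = sim.shape[0]
    return [{j for j in range(n) if j != i and sim[i, j] >= tau} for i in range(n)]


def greedy_dominating_set(adj: list[set[int]]) -> set[int]:
    """Select poses until every pose is either selected or adjacent to a selected one."""
    n = len(adj)
    uncovered = set(range(n))
    chosen: set[int] = set()
    while uncovered:
        # Greedily pick the pose that covers the most still-uncovered poses.
        best = max(range(n), key=lambda v: len(({v} | adj[v]) & uncovered))
        chosen.add(best)
        uncovered -= {best} | adj[best]
    return chosen


if __name__ == "__main__":
    # Toy 5-pose similarity matrix (hypothetical values).
    sim = np.array([
        [1.0, 0.9, 0.2, 0.1, 0.0],
        [0.9, 1.0, 0.8, 0.1, 0.0],
        [0.2, 0.8, 1.0, 0.7, 0.1],
        [0.1, 0.1, 0.7, 1.0, 0.9],
        [0.0, 0.0, 0.1, 0.9, 1.0],
    ])
    adj = build_pose_graph(sim, tau=0.6)
    keep = greedy_dominating_set(adj)
    print("Poses whose point clouds are kept:", sorted(keep))
```

Only the point clouds associated with the selected poses would be retained in the reduced map; the remaining poses are covered by a sufficiently similar neighbor and their data can be discarded.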
