Abstract

Registration methods for point clouds have become a key component of many SLAM systems on autonomous vehicles. However, an accurate estimate of the uncertainty of such registration is a key requirement for the consistent fusion of this kind of measurement in a SLAM filter. This estimate, which is normally given as a covariance of the transformation computed between point cloud reference frames, has been modelled following different approaches, among which the most accurate is considered to be the Monte Carlo method. However, a Monte Carlo approximation is cumbersome to use inside a time-critical application such as online SLAM. Efforts have been made to estimate this covariance via machine learning using carefully designed features to abstract the raw point clouds [1]. However, the performance of this approach is sensitive to the choice of features. We argue that it is possible to learn the features together with the covariance by working with the raw data, and we therefore propose a new approach based on PointNet [2]. In this work, we train this network using as the loss the KL divergence between the learned uncertainty distribution and one computed by the Monte Carlo method. We test the performance of the general model presented by applying it to our target use case of SLAM with an autonomous underwater vehicle (AUV), restricted to the 2-dimensional registration of 3D bathymetric point clouds.
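For a concrete picture of the loss described above, the snippet below is a minimal sketch (not the authors' implementation) of a KL-divergence loss between two zero-mean Gaussians over the registration error: one parameterised by a covariance the network predicts through a Cholesky factor, the other by the Monte Carlo reference covariance. The function name, the Cholesky parameterisation and the direction of the KL term are illustrative assumptions.

    import torch
    from torch.distributions import MultivariateNormal, kl_divergence

    def covariance_kl_loss(pred_tril, mc_cov):
        # pred_tril: (B, D, D) lower-triangular Cholesky factors predicted by the
        #            network (positive diagonal), so Sigma_pred = L L^T is positive definite.
        # mc_cov:    (B, D, D) reference covariances estimated by Monte Carlo
        #            sampling of the GICP registration.
        # Returns the batch-mean KL( N(0, Sigma_pred) || N(0, Sigma_MC) );
        # the direction of the KL term is an assumption in this sketch.
        zero_mean = pred_tril.new_zeros(pred_tril.shape[0], pred_tril.shape[-1])
        predicted = MultivariateNormal(zero_mean, scale_tril=pred_tril)
        reference = MultivariateNormal(zero_mean, covariance_matrix=mc_cov)
        return kl_divergence(predicted, reference).mean()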

Highlights

  • Over the last few years, sensors capable of providing dense representations of 3D environments as raw data, such as RGB-D cameras, LiDAR or multibeam sonar, have become popular in the SLAM community

  • The reason to focus on GICP as opposed to iterative closest point (ICP) is that this method works better on the kind of bathymetric point clouds produced from surveys of unstructured seabed, as discussed in [6]

  • We have presented PointNetKL, an artificial neural network (ANN) designed to learn the uncertainty distribution of the GICP registration process from unordered sets of multidimensional points
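As an informal illustration of such a network, the sketch below (assuming PyTorch; the class name, layer sizes and the Cholesky-factor head are hypothetical, not the published PointNetKL architecture) shows the core idea: a shared per-point MLP followed by symmetric max pooling, which makes the prediction invariant to the ordering of the input points, with a regression head that outputs a valid covariance through its Cholesky factor. Its output could feed a KL loss like the one sketched after the abstract.

    import torch
    import torch.nn as nn

    class PointCloudCovarianceNet(nn.Module):
        # Shared per-point MLP + max pooling (order invariant), regressing the
        # lower-triangular Cholesky factor L of a D x D covariance (Sigma = L L^T).
        def __init__(self, in_dim=3, cov_dim=3):
            super().__init__()
            self.cov_dim = cov_dim
            self.point_mlp = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(),
                nn.Linear(64, 256), nn.ReLU(),
            )
            self.head = nn.Linear(256, cov_dim * (cov_dim + 1) // 2)

        def forward(self, points):                   # points: (B, N, in_dim)
            feats = self.point_mlp(points)            # (B, N, 256) per-point features
            global_feat = feats.max(dim=1).values     # (B, 256) permutation-invariant pooling
            flat = self.head(global_feat)             # (B, D*(D+1)/2) Cholesky entries
            B, D = points.shape[0], self.cov_dim
            tril = flat.new_zeros(B, D, D)
            rows, cols = torch.tril_indices(D, D)
            tril[:, rows, cols] = flat
            # Softplus keeps the diagonal positive so the covariance is positive definite.
            diag = nn.functional.softplus(tril.diagonal(dim1=-2, dim2=-1))
            return tril.tril(diagonal=-1) + torch.diag_embed(diag)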


Summary

Introduction

Over the last few years, sensors capable of providing dense representations of 3D environments as raw data, such as RGB-D cameras, LiDAR or multibeam sonar, have become popular in the SLAM community. These sensors provide accurate models of the geometry of a scene in the form of sets of points, which allows for a dense representation of maps that is easy to visualise and render.

