Abstract

Estimating the translation between consecutive frames, i.e., odometry, plays an important role in autonomous navigation. This paper presents an odometry estimation method that uses sparse LiDAR points and image feature points. When LiDAR measurements are sparse, it is difficult to estimate depth accurately at image feature points, and feature points with low-accuracy depth cause misconvergence in the odometry optimization. To improve robustness against this misconvergence, a new method is proposed in which a Gaussian process estimates not only the depth at each image feature point but also its variance. Based on this variance, the residual of each image feature is evaluated either in the world coordinate frame with depth or in the image coordinate frame without depth. This allows more accurate and robust estimation than conventional methods when LiDAR points are sparse. In an experiment with sparse LiDAR points simulated from the KITTI dataset, the proposed method is confirmed to estimate odometry more accurately than conventional methods.
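The core idea of the abstract can be illustrated with a minimal sketch: Gaussian process regression over the sparse LiDAR returns projected into the image yields both a depth mean and a variance at each image feature point, and the variance decides which residual form is used. This is only an illustrative toy, not the paper's implementation; the kernel, its length scale, the variance threshold, and all numeric values below are assumptions chosen for the example.

```python
import numpy as np

def rbf(a, b, ell=20.0, sf=1.0):
    # Squared-exponential kernel on pixel coordinates (ell, sf are assumed values).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_depth(train_uv, train_d, query_uv, noise=0.05):
    """Standard GP posterior: depth mean and variance at query pixels."""
    K = rbf(train_uv, train_uv) + noise ** 2 * np.eye(len(train_uv))
    Ks = rbf(train_uv, query_uv)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, train_d))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(query_uv, query_uv)) - (v * v).sum(axis=0)
    return mean, var

# Toy data: sparse LiDAR returns projected into the image as (u, v) with depth [m].
lidar_uv = np.array([[10.0, 10.0], [12.0, 40.0], [50.0, 20.0], [80.0, 75.0]])
lidar_d = np.array([4.0, 4.2, 6.0, 9.5])
# Image feature points: the first lies near LiDAR returns, the second does not.
feat_uv = np.array([[11.0, 25.0], [90.0, 90.0]])

mu, var = gp_depth(lidar_uv, lidar_d, feat_uv)
for m, s2 in zip(mu, var):
    # Assumed threshold: low variance -> use depth (world-frame residual),
    # high variance -> fall back to an image-frame residual without depth.
    kind = "world-frame residual" if s2 < 0.5 else "image-frame residual"
    print(f"depth {m:.2f} m, variance {s2:.3f} -> {kind}")
```

In this toy setting the feature surrounded by LiDAR returns receives a low-variance depth estimate and would be handled with a 3-D residual, while the isolated feature receives a high-variance estimate and would fall back to a 2-D image-coordinate residual.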
