Abstract

Autonomous driving demands precise multi-sensor fusion positioning on resource-limited embedded systems. LiDAR-centered sensor fusion is a mainstream choice for navigation because of its insensitivity to illumination and viewpoint changes. However, such systems struggle to process large-scale sequential LiDAR data with the limited resources available on board, making fully LiDAR-centralized sensor fusion impractical. As a result, most mainstream positioning methods rely on hand-crafted features such as planes and edges to ease this burden, which has become a cornerstone of LiDAR-inertial sensor fusion. Although such ultra-lightweight feature extraction satisfies the real-time constraints of LiDAR-centered fusion, it is severely vulnerable to high-speed rotational or translational perturbation. In this paper, we propose a sparse-tensor-based LiDAR-inertial fusion method for autonomous driving embedded systems. Leveraging the power of sparse tensors, global geometric features are extracted so that the defects of point cloud sparsity are alleviated. An inertial sensor is deployed to replace the time-consuming coarse-level point-wise inlier matching step. We conduct experiments on both representative benchmark datasets and real-world scenes. The evaluation results demonstrate the robustness and accuracy of the proposed solution compared to classical methods.
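The sparse-tensor pipeline described above starts by quantizing the raw point cloud into occupied voxels, keeping one feature per occupied cell. The paper's exact backbone is not shown here; the following is a minimal illustrative sketch of that voxelization step (the function name `sparse_voxelize` and the centroid features are our assumptions, not the authors' implementation):

```python
import numpy as np

def sparse_voxelize(points, voxel_size=0.2):
    """Quantize an (N, 3) point cloud into unique sparse voxel coordinates.

    Returns (coords, feats): integer voxel indices of occupied cells and
    the centroid of the points falling in each cell -- the typical input
    format for a sparse-tensor convolution backbone.
    (Illustrative sketch only; not the paper's implementation.)
    """
    # Map each point to its integer voxel index.
    coords = np.floor(points / voxel_size).astype(np.int64)
    # Deduplicate occupied voxels; `inv` maps each point to its voxel row.
    uniq, inv = np.unique(coords, axis=0, return_inverse=True)
    # Accumulate per-voxel centroids.
    feats = np.zeros((len(uniq), 3), dtype=np.float64)
    counts = np.zeros(len(uniq), dtype=np.int64)
    np.add.at(feats, inv, points)
    np.add.at(counts, inv, 1)
    feats /= counts[:, None]
    return uniq, feats
```

Because only occupied voxels are stored, memory and compute scale with scene occupancy rather than with the full dense grid, which is what makes sparse tensors attractive on embedded hardware.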
