Abstract

In complex environments subject to long-term changes in illumination, season, and viewpoint, robust, accurate, and high-frequency global positioning on a light detection and ranging (LiDAR) map remains a challenge, yet it is crucial for autonomous vehicles and robots. To this end, a novel observation model based on siamese multitask convolutional neural networks (CNNs) with cascaded modules is presented in this article. In particular, a new pseudoimage representation of the LiDAR submap is designed to enrich scene texture and enhance rotation invariance. In addition, a novel siamese CNN coupling NeXtVLAD and long short-term memory is designed for the first time, which reliably predicts similarity and quaternion simultaneously. Finally, the predicted quaternion observation is integrated into an extended Kalman filter framework for multisensor fusion to achieve robust, high-frequency global pose estimation. Extensive evaluations on the KITTI, NCLT, and real-world datasets show that the proposed method not only achieves remarkable precision-recall performance but also improves the robustness and accuracy of long-term positioning.
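The abstract does not specify how the LiDAR submap is encoded as a pseudoimage or how submap similarity is scored. As a minimal illustrative sketch only (assuming a bird's-eye-view max-height grid, one common choice for LiDAR pseudoimages, and a cosine score in place of the learned siamese similarity), the idea can be outlined as:

```python
import math

def bev_pseudoimage(points, grid=64, extent=40.0):
    """Project a LiDAR submap (list of (x, y, z) points) into a
    bird's-eye-view max-height pseudoimage. Illustrative stand-in for
    the paper's pseudoimage design, which is not given in the abstract."""
    img = [[0.0] * grid for _ in range(grid)]
    for x, y, z in points:
        # Map x, y in [-extent, extent) to cell indices.
        i = int((x + extent) / (2 * extent) * grid)
        j = int((y + extent) / (2 * extent) * grid)
        if 0 <= i < grid and 0 <= j < grid:
            img[i][j] = max(img[i][j], z)  # keep max height per cell
    return img

def cosine_similarity(a, b):
    """Cosine score between two flattened pseudoimages; the paper
    instead predicts similarity with a learned siamese CNN."""
    va = [v for row in a for v in row]
    vb = [v for row in b for v in row]
    dot = sum(p * q for p, q in zip(va, vb))
    na = math.sqrt(sum(p * p for p in va))
    nb = math.sqrt(sum(q * q for q in vb))
    return dot / (na * nb + 1e-9)
```

In a place-recognition pipeline of this kind, the query pseudoimage is compared against map pseudoimages, and the best match supplies the global observation that the filter fuses.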
