Abstract

Highly robust and accurate localization using on-board sensors in the absence of GNSS is a key requirement for long-term autonomous vehicle navigation. However, the limited sensing capability of any single sensor cannot withstand feature degradation and high-speed motion in complex and challenging scenarios, which makes existing LiDAR-based and vision-based localization algorithms fragile. To address these problems, we propose a highly robust and accurate localization system that fuses information from multiple sensors. First, we propose a global localization method that processes visual information with deep learning models to provide an accurate initial pose for the vehicle, solving the problem of global localization in the absence of GNSS. Then, we propose a highly robust and accurate LiDAR-Vision-IMU localization method based on pose graph optimization, which integrates LiDAR-based poses with visual-inertial poses. The method detects sensor degradation in real time and dynamically adjusts the fusion weights accordingly, overcoming the poor localization performance of any single sensor. Finally, we conduct experiments in complex and challenging scenarios, including urban roads, tunnels, and unstructured mountainous terrain. Experimental results show that our method achieves a global localization error of less than 0.5 m in complex scenes. Over a 21.1 km unstructured mountainous route, the root mean square errors (RMSE) of translation and rotation are 0.67 m and 0.048 rad, respectively. Compared with most existing algorithms, our method achieves the most robust and accurate localization performance.
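The abstract states that sensor degradation is detected in real time and used to adjust the fusion weights, but does not give the weighting rule. The sketch below is a minimal illustration of one common approach, assuming (as in degeneracy-aware LiDAR odometry) that the smallest eigenvalue of each estimator's information matrix signals degradation; the function names and the threshold are hypothetical and are not taken from the paper.

```python
import numpy as np

def degradation_weight(info_matrix: np.ndarray, threshold: float = 100.0) -> float:
    """Hypothetical degradation score: the smallest eigenvalue of an
    estimator's 6x6 information (Hessian) matrix. Near-zero eigenvalues
    indicate poorly constrained directions (e.g. LiDAR in a featureless
    tunnel), so the corresponding edge is down-weighted toward zero."""
    lam_min = float(np.linalg.eigvalsh(info_matrix).min())
    return min(max(lam_min / threshold, 0.0), 1.0)

def fuse_edge_information(info_lidar: np.ndarray, info_vio: np.ndarray):
    """Scale each relative-pose factor's information matrix by its
    degradation weight before both edges enter the pose graph; the
    optimizer then trusts whichever modality is currently healthy."""
    w_l = degradation_weight(info_lidar)
    w_v = degradation_weight(info_vio)
    return w_l * info_lidar, w_v * info_vio

if __name__ == "__main__":
    # Healthy LiDAR geometry vs. a visually degraded (low-texture) scene.
    info_lidar = 500.0 * np.eye(6)
    info_vio = 20.0 * np.eye(6)
    fused_l, fused_v = fuse_edge_information(info_lidar, info_vio)
    print(fused_l[0, 0], fused_v[0, 0])  # LiDAR edge keeps full weight
```

In a real system the two down-weighted relative-pose factors would be added to a pose-graph back end (e.g. g2o or GTSAM) rather than combined directly; the point of the sketch is only the dynamic re-weighting step.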
