Abstract

Quickly acquiring information about the surrounding environment and the vehicle's location is key to achieving autonomous driving. However, accurate and robust localization and mapping remain challenging for field vehicles and robots because of the open terrain, changeable ground conditions, and Global Navigation Satellite System (GNSS) denial typical of complex field environments. In this study, an LVI-SAM-based lidar-inertial-visual fusion simultaneous localization and mapping (SLAM) algorithm was proposed to solve the localization and mapping problem for vehicles in such open, bumpy, and Global Positioning System (GPS)-denied field environments. In this method, a joint lidar front end for pose estimation and correction was designed using the Super4PCS, Iterative Closest Point (ICP), and Normal Distributions Transform (NDT) algorithms and their variants. The algorithm balances localization accuracy and real-time performance by performing lower-frequency pose correction on top of higher-frequency pose estimation. Experimental results in a complex field environment show that, compared with LVI-SAM, the proposed method reduces the translational localization error by about 4.7% and builds a three-dimensional point cloud map of the environment in real time, achieving high-precision and high-robustness localization and mapping of the vehicle in complex field environments.
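
To illustrate the coarse-to-fine registration idea underlying the joint lidar front end, the following is a minimal sketch in Python using Open3D. It is not the paper's implementation: Open3D provides neither Super4PCS nor NDT, so a feature-based RANSAC global alignment stands in for the coarse (Super4PCS) stage and point-to-plane ICP for the fine correction stage; the voxel size, search radii, and thresholds are assumed values for illustration only.

```python
# Hypothetical coarse-to-fine registration of two lidar scans (not the paper's code).
import open3d as o3d

VOXEL_SIZE = 0.5  # assumed downsampling resolution in metres


def preprocess(cloud):
    """Downsample a scan, estimate normals, and compute FPFH features."""
    down = cloud.voxel_down_sample(VOXEL_SIZE)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=VOXEL_SIZE * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=VOXEL_SIZE * 5, max_nn=100))
    return down, fpfh


def register(source, target):
    """Coarse global alignment followed by fine ICP refinement."""
    src_down, src_fpfh = preprocess(source)
    tgt_down, tgt_fpfh = preprocess(target)

    # Coarse stage (stand-in for Super4PCS): feature-based RANSAC alignment.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh,
        mutual_filter=True,
        max_correspondence_distance=VOXEL_SIZE * 1.5,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[],
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Fine stage: point-to-plane ICP seeded with the coarse transform.
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, VOXEL_SIZE * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```

In a two-rate scheme such as the one described in the abstract, the lightweight fine-alignment step would run on every incoming scan for high-frequency pose estimation, while the more expensive coarse-plus-fine pipeline would run at a lower frequency to correct accumulated drift.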
