In the pursuit of precision within Simultaneous Localization and Mapping (SLAM), multi-sensor fusion has emerged as a validated strategy with broad potential in robotics applications. This work presents GPC-LIVO, an accurate and robust LiDAR-Inertial-Visual Odometry system that integrates geometric and photometric information into a composite measurement model with a point-wise updating architecture. GPC-LIVO constructs a belief factor model to assign different weights to the geometric and photometric observations in the measurement model, and adopts an adaptive error-state Kalman filter back-end that dynamically estimates the covariances of the two observation types. Since LiDAR points have larger measurement errors at endpoints and edges, we fuse photometric information only for LiDAR planar features and propose a corresponding validation method based on the associated image plane. Comprehensive experiments are conducted on GPC-LIVO, encompassing both publicly available data sequences and data collected with our own hardware setup. The results show that the proposed system outperforms other state-of-the-art odometry frameworks and demonstrate its ability to operate effectively under various challenging environmental conditions. GPC-LIVO outputs state estimates at a high frequency (1-5 kHz, varying with the number of LiDAR points processed in a frame) and achieves computation times compatible with real-time operation.
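The weighted fusion described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function and parameter names (`pointwise_update`, the belief factors `w`) are hypothetical, the state is reduced to a scalar, and the residuals are assumed precomputed. It only shows the general idea of a point-wise sequential Kalman update in which a belief factor inflates the covariance of the less trusted observation.

```python
# Hedged sketch (assumed names, scalar toy state): a point-wise sequential
# Kalman update fusing one geometric and one photometric residual, where a
# belief factor w down-weights an observation by inflating its covariance.

def pointwise_update(x, P, residuals):
    """Sequentially fuse scalar residuals into the state estimate.

    x : scalar error-state estimate
    P : scalar state covariance
    residuals : list of (r, H, R, w) tuples, one per observation,
                where w is the belief factor (higher = more trusted).
    """
    for r, H, R, w in residuals:
        R_eff = R / max(w, 1e-9)   # belief factor scales the covariance
        S = H * P * H + R_eff      # innovation covariance
        K = P * H / S              # Kalman gain
        x = x + K * r              # state correction
        P = (1.0 - K * H) * P      # covariance update
    return x, P

# Example: geometric cue trusted more (w=0.9) than photometric (w=0.1).
x, P = pointwise_update(0.0, 1.0,
                        [(0.5, 1.0, 0.01, 0.9),    # geometric residual
                         (-0.3, 1.0, 0.04, 0.1)])  # photometric residual
```

In this toy example the update is dominated by the geometric observation; a low belief factor makes the photometric residual contribute only a small correction, which mirrors the paper's idea of adaptively re-weighting the two cues per point.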