Accurate and long-distance depth estimation for visual landmarks is challenging in visual-inertial navigation systems (VINS). In visually degraded scenes with illumination changes, moving objects, or weak texture, depth estimation becomes even more difficult, degrading robustness and accuracy. For low-speed robot navigation, we present a solid-state-LiDAR-enhanced VINS (LE-VINS) to improve robustness and accuracy in challenging environments. The point clouds from the solid-state LiDAR are projected onto the visual keyframe using the inertial navigation system (INS) pose for depth association, while the motion distortion is compensated. A robust depth-association method with an effective plane-checking algorithm is proposed to estimate the landmark depth. With the estimated depth, we present a LiDAR depth factor that incorporates accurate depth measurements for visual landmarks into factor graph optimization (FGO). Visual feature, LiDAR depth, and IMU measurements are tightly fused within the FGO framework to achieve maximum-a-posteriori estimation. Field tests were conducted on a low-speed robot in large-scale challenging environments. The results demonstrate that the proposed LE-VINS yields significantly improved robustness and accuracy compared to the original VINS. Moreover, LE-VINS achieves higher accuracy than a state-of-the-art LiDAR-visual-inertial navigation system, and also outperforms an existing LiDAR-enhanced method, benefiting from the robust depth-association algorithm and the effective LiDAR depth factor.
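To make the depth-association idea concrete, below is a minimal Python/NumPy sketch of associating a LiDAR depth to a visual landmark with a local plane check. It assumes the point cloud has already been motion-compensated and transformed into the camera frame via the INS pose, as the abstract describes; all names and thresholds (associate_depth, search_radius_px, plane_thresh) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def associate_depth(landmark_uv, lidar_points_cam, K,
                    search_radius_px=3.0, plane_thresh=0.05):
    """Hypothetical sketch: estimate a landmark's depth from nearby
    LiDAR points, accepting it only if the points form a local plane.

    landmark_uv:      (2,) pixel coordinates of the visual landmark
    lidar_points_cam: (N, 3) LiDAR points in the camera frame
                      (motion distortion assumed already compensated)
    K:                (3, 3) camera intrinsic matrix
    Returns an estimated depth in meters, or None if the check fails.
    """
    # Project LiDAR points into the image plane.
    z = lidar_points_cam[:, 2]
    pts = lidar_points_cam[z > 0.1]          # keep points in front of the camera
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Collect LiDAR points projecting near the landmark pixel.
    dist = np.linalg.norm(uv - landmark_uv, axis=1)
    near = pts[dist < search_radius_px]
    if len(near) < 5:
        return None                          # too few points for a plane fit

    # Plane check: fit a local plane via SVD and reject non-planar patches.
    centroid = near.mean(axis=0)
    _, _, vh = np.linalg.svd(near - centroid)
    normal = vh[-1]                          # direction of smallest variance
    residuals = np.abs((near - centroid) @ normal)
    if residuals.max() > plane_thresh:
        return None                          # patch is not planar enough

    # Intersect the landmark's bearing ray with the fitted plane.
    ray = np.linalg.inv(K) @ np.array([*landmark_uv, 1.0])  # ray[2] == 1
    denom = ray @ normal
    if abs(denom) < 1e-6:
        return None                          # ray nearly parallel to plane
    return (centroid @ normal) / denom       # depth along the optical axis
```

In a tightly coupled FGO pipeline, a depth returned this way would then be wrapped in a LiDAR depth residual on the corresponding landmark; the plane check serves to reject associations on depth discontinuities or sparse returns before they enter the optimization.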