Abstract

In recent years, many excellent works on visual-inertial SLAM and laser-based SLAM have been proposed. Although an inertial measurement unit (IMU) significantly improves motion estimation by reducing the impact of illumination variation and texture-less regions on visual tracking, tracking failures still occur when such conditions persist for a long time. Similarly, in structure-less environments the laser module fails for lack of sufficient geometric features. In addition, motion estimation with a moving lidar suffers from distortion because range measurements are received continuously. To solve these problems, we propose a robust and high-accuracy visual-inertial-laser SLAM system. The system starts with a tightly-coupled visual-inertial method for motion estimation, followed by scan matching to further refine the estimate and register the point cloud onto the map. Furthermore, the modules adjust automatically and flexibly: when one module fails, the remaining modules take over the motion-tracking task. To further improve accuracy, loop closure and proximity detection are implemented to eliminate accumulated drift. When a loop or proximity is detected, we perform six degree-of-freedom (6-DOF) pose graph optimization to achieve global consistency. The performance of our system is verified on public datasets, and the experimental results show that the proposed method achieves superior accuracy compared with other state-of-the-art algorithms.
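To make the loop-closure step concrete, the sketch below illustrates a generic 6-DOF pose graph optimization of the kind described in the abstract. It is a minimal illustration, not the paper's implementation: the GTSAM library, the number of keyframes, and all noise values are assumptions chosen only for the example. Sequential odometry factors chain the keyframe poses, and a single loop/proximity factor pulls the drifted trajectory back into global consistency.

```python
import numpy as np
import gtsam

# Pose graph with odometry (sequential) factors and one loop-closure factor.
graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Assumed noise models: odometry is less certain than the loop constraint.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.1, 0.1, 0.1]))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01, 0.01, 0.01, 0.02, 0.02, 0.02]))
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-6))

# Anchor the first keyframe so the optimization is well constrained.
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))

# Simulated odometry between keyframes: ~1 m forward with a small yaw drift.
num_poses = 5
drifting_step = gtsam.Pose3(gtsam.Rot3.Yaw(0.02), np.array([1.0, 0.0, 0.0]))
for i in range(num_poses - 1):
    graph.add(gtsam.BetweenFactorPose3(i, i + 1, drifting_step, odom_noise))

# A detected loop (or proximity) between the last and first keyframe adds a
# relative-pose constraint that corrects the accumulated drift.
loop_measurement = gtsam.Pose3(gtsam.Rot3(), np.array([4.0, 0.0, 0.0]))
graph.add(gtsam.BetweenFactorPose3(0, num_poses - 1, loop_measurement, loop_noise))

# Initial guesses come from the (drifting) odometry chain.
pose = gtsam.Pose3()
initial.insert(0, pose)
for i in range(num_poses - 1):
    pose = pose.compose(drifting_step)
    initial.insert(i + 1, pose)

# Levenberg-Marquardt optimization over all 6-DOF keyframe poses.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for i in range(num_poses):
    print(i, result.atPose3(i).translation())
```

In a full system the odometry factors would come from the visual-inertial and scan-matching front ends, and the loop/proximity factors from place recognition, but the optimization structure stays the same.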
