Abstract

To address the challenges that dynamic environments pose for Simultaneous Localization and Mapping (SLAM), a detection-first, tightly coupled LiDAR-Visual-Inertial SLAM system incorporating a LiDAR, a camera, and an inertial measurement unit (IMU) is proposed. First, point cloud clusters with semantic labels are obtained by fusing image and point cloud information. Then, a tracking algorithm is applied to estimate the motion state of the detected targets. Afterwards, the tracked dynamic targets are used to eliminate the feature points belonging to them. Finally, a factor graph is used to jointly optimize the IMU pre-integration and to tightly couple the laser odometry and visual odometry within the system. To validate the performance of the proposed SLAM framework, both public datasets (KITTI and UrbanNav) and data from real-world scenes are tested. The experimental results show that, compared with LeGO-LOAM, LIO-SAM, and LVI-SAM on the public datasets, the root mean squared error (RMSE) of the proposed algorithm is decreased by 44.56% (4.47 m) in highly dynamic scenes and by 4.15% (4.62 m) in normal scenes. On the real-world scene data, the proposed algorithm directly mitigates the impact of dynamic objects on map building.
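The dynamic-feature rejection step can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the tracker outputs dynamic targets as axis-aligned 2D bounding boxes in the image plane (the hypothetical `boxes` array) and discards any visual feature point whose pinhole projection through an intrinsic matrix `K` falls inside one of them; all names and values are assumptions for illustration.

```python
import numpy as np

def filter_dynamic_features(points_cam, K, boxes):
    """Drop feature points that fall inside tracked dynamic-object boxes.

    points_cam : (N, 3) 3-D feature points in the camera frame (Z forward).
    K          : (3, 3) pinhole camera intrinsic matrix.
    boxes      : (M, 4) tracked dynamic boxes as [x_min, y_min, x_max, y_max]
                 in pixel coordinates (hypothetical tracker output).
    Returns the subset of points_cam considered static.
    """
    # Project the 3-D points into the image plane: [u, v, 1] ~ K @ p / p_z.
    uv = (K @ points_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # A point is dynamic if its projection lies inside any tracked box.
    dynamic = np.zeros(len(points_cam), dtype=bool)
    for x0, y0, x1, y1 in boxes:
        inside = (
            (uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
            (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
        )
        dynamic |= inside

    # Keep only points outside every dynamic box with positive depth.
    static = (~dynamic) & (points_cam[:, 2] > 0)
    return points_cam[static]

# Toy usage with a hypothetical intrinsic matrix and one tracked box.
K = np.array([[718.0, 0.0, 607.0],
              [0.0, 718.0, 185.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[1.0, 0.5, 10.0],    # projects inside the box -> removed
                [-5.0, 0.5, 10.0]])  # projects outside        -> kept
boxes = np.array([[600.0, 150.0, 760.0, 260.0]])
print(filter_dynamic_features(pts, K, boxes))
```

Rejecting dynamic points before odometry, rather than after, is what makes the pipeline "detection-first": the remaining static features then feed the tightly coupled factor-graph optimization.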
