Abstract

This paper proposes a new graph-optimization-based Simultaneous Localization and Mapping (SLAM) method that combines Light Detection and Ranging (LiDAR), an RGB-D camera, a wheel encoder, and an Inertial Measurement Unit (IMU). It performs joint positioning with the four sensors by using an unscented Kalman filter (UKF) and designs a fusion strategy for the 2D LiDAR point cloud and the RGB-D camera point cloud. In the sequential registration stage, 3D point cloud information generated by the RGB-D camera is expressed in the 2D LiDAR frame and added to the new SLAM method, and the 2D LiDAR point cloud and the 3D RGB-D point cloud are matched using Correlation Scan Matching (CSM). In the loop closure detection stage, the method further verifies candidate loop closures found by 2D LiDAR matching using 3D point cloud descriptors. The feasibility and effectiveness of the method are verified through theoretical derivation, simulation experiments, and physical tests. The experiments show that the designed multi-sensor SLAM framework achieves a good mapping effect with high precision and accuracy.
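To make the sequential-registration idea concrete, the sketch below shows one way an RGB-D camera's 3D point cloud could be reduced to a 2D pseudo-scan in the LiDAR frame so that CSM can match it against the real 2D LiDAR scan. This is a minimal illustration, not the paper's implementation: the function name, the height-band parameter, and the assumption that extrinsic calibration has already been applied are ours.

```python
import numpy as np

def rgbd_cloud_to_pseudo_scan(points_lidar_frame, z_band=(-0.05, 0.05),
                              angle_min=-np.pi, angle_max=np.pi, n_beams=360):
    """Project an RGB-D 3D point cloud (already expressed in the 2D LiDAR
    frame via extrinsic calibration) into a 2D pseudo-scan.

    points_lidar_frame : (N, 3) array of XYZ points.
    z_band             : height slab around the LiDAR scan plane to keep.
    Returns an (n_beams,) array of ranges (np.inf where no return).
    """
    pts = points_lidar_frame
    # Keep only points near the LiDAR scan plane.
    mask = (pts[:, 2] >= z_band[0]) & (pts[:, 2] <= z_band[1])
    xy = pts[mask, :2]

    ranges = np.full(n_beams, np.inf)
    angles = np.arctan2(xy[:, 1], xy[:, 0])
    dists = np.linalg.norm(xy, axis=1)
    bins = ((angles - angle_min) / (angle_max - angle_min) * n_beams).astype(int)
    bins = np.clip(bins, 0, n_beams - 1)
    # Keep the nearest return per beam, mimicking a single-echo 2D LiDAR.
    np.minimum.at(ranges, bins, dists)
    return ranges
```

The resulting pseudo-scan and the real 2D LiDAR scan can then be fed to the same correlation scan matcher.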

Highlights

  • Simultaneous Localization and Mapping (SLAM) technology is one of the essential technologies for achieving autonomous movement of mobile robots, and it is widely applied in mobile robots, self-driving cars, drones, autonomous underwater vehicles, and other platforms

  • The authors in [19] studied an autonomous underwater vehicle (AUV) equipped with a Mechanical Scanning Imaging Sonar (MSIS) as its main sensor and introduced a method based on the Smooth Variable Structure Filter (SVSF)

  • The authors in [30] combined Light Detection and Ranging (LiDAR) and vision cameras, fusing laser point clouds with image feature points through sparse pose adjustment to optimize the robot's pose, while using a bag-of-words model for loop closure detection and adding the resulting constraints to further refine the pose; their results show improved positioning accuracy and loop closure detection accuracy, and environmental maps that are more accurate than those built from a single laser sensor (a minimal sketch of the bag-of-words idea follows this list)
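
The bag-of-words loop closure idea referenced above can be sketched as follows. This is a generic illustration, not the implementation of [30]: the use of OpenCV ORB features, the L2 word assignment, and the pre-trained vocabulary (e.g., k-means centroids over a descriptor corpus) are all assumptions made here for brevity.

```python
import numpy as np
import cv2

def bow_histogram(image, vocabulary):
    """Describe an image as a normalized histogram of visual words.

    vocabulary : (K, 32) array of ORB descriptor centroids, assumed to be
    trained offline (e.g., k-means over a descriptor corpus).
    """
    orb = cv2.ORB_create()
    _, desc = orb.detectAndCompute(image, None)
    hist = np.zeros(len(vocabulary))
    if desc is None:
        return hist
    # Assign each descriptor to its nearest visual word (L2 distance on the
    # raw descriptor bytes is used here for brevity).
    d = np.linalg.norm(desc[:, None, :].astype(np.float32)
                       - vocabulary[None, :, :].astype(np.float32), axis=2)
    np.add.at(hist, d.argmin(axis=1), 1.0)
    return hist / max(hist.sum(), 1.0)

def is_loop_candidate(hist_a, hist_b, threshold=0.8):
    """Flag a loop closure candidate when two histograms are similar."""
    denom = np.linalg.norm(hist_a) * np.linalg.norm(hist_b)
    return denom > 0 and float(hist_a @ hist_b) / denom > threshold
```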


Summary

INTRODUCTION

SLAM technology is one of the essential technologies for achieving autonomous movement of mobile robots and is widely applied in mobile robots, self-driving cars, drones, and autonomous underwater vehicles. The authors in [19] studied an autonomous underwater vehicle (AUV) equipped with a Mechanical Scanning Imaging Sonar (MSIS) as its main sensor and introduced a method based on the Smooth Variable Structure Filter (SVSF). This filter merges information from various sensors (Doppler Velocity Logs (DVL), the MTi motion reference unit (MRU)) with observations from line-feature extraction to estimate the vehicle's motion and to construct a map in partially structured environments. The relative pose between neighboring moments can be obtained by scan matching of the LiDAR data itself and is recorded as $s_k$. The problem of jointly calibrating the odometer and the LiDAR is then expressed as follows: given the speeds $(v_l(t), v_r(t))$ of the robot's left and right wheels over $t \in [t_1, t_n]$ and the relative poses $s_k$, estimate the calibration parameters.
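One common way to make this calibration problem concrete is the standard odometry-LiDAR calibration formulation sketched below; the parameters $r_l$, $r_r$, $b$ and the extrinsic pose $\ell$ are illustrative symbols introduced here, not taken from the paper.

```latex
% Differential-drive kinematics with unknown left/right wheel radii
% r_l, r_r and axle length b:
v(t)      = \tfrac{1}{2}\,\bigl(r_r v_r(t) + r_l v_l(t)\bigr), \qquad
\omega(t) = \tfrac{1}{b}\,\bigl(r_r v_r(t) - r_l v_l(t)\bigr)

% Integrating (v, \omega) over [t_k, t_{k+1}] gives the odometry-predicted
% relative pose \hat{q}_k(r_l, r_r, b). With \ell the LiDAR-to-robot
% extrinsic pose and \oplus, \ominus the pose composition/inversion
% operators, joint calibration is the least-squares problem
(\hat{r}_l, \hat{r}_r, \hat{b}, \hat{\ell}) =
  \arg\min_{r_l, r_r, b, \ell}\;
  \sum_k \Bigl\| s_k \ominus
    \bigl( \ominus\ell \oplus \hat{q}_k(r_l, r_r, b) \oplus \ell \bigr)
  \Bigr\|^2
```

In this view, the scan-matched relative poses $s_k$ act as observations of the sensor's motion, and the recorded wheel speeds supply the kinematic model whose parameters are being calibrated.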

CALIBRATION PRINCIPLE OF VISION SENSOR AND LiDAR
LiDAR LOOP CLOSURE DETECTION COMBINED WITH 3D POINT CLOUD DESCRIPTORS
Findings
CONCLUSION
