Abstract

In this paper, SLAM systems based on monocular and stereo visual sensors are introduced. The SLAM solutions are implemented in both indoor and outdoor environments. The SLAM samples have been captured in different modes, such as a straight-line trajectory that enables measurement of the drift, and a loop sample that is used to test the loop closure and its corresponding trajectory deformation. To verify the trajectory scale, a baseline method has been used. In addition, ground truth has been captured for both the indoor and outdoor samples to measure the biases and drifts produced by the SLAM solution. Both monocular and stereo SLAM data have been captured with the same visual sensors, which in the stereo case had a baseline of 20.00 cm. It is shown that the stereo SLAM localization results show 75% higher precision than the monocular SLAM solution. In addition, the indoor results of the monocular SLAM are more precise than the outdoor results. However, the outdoor results of the stereo SLAM are more precise than the indoor results by 30%, which is a result of the cameras' small stereo baseline. In the vertical SLAM localization component, the stereo SLAM generally shows 60% higher precision than the monocular SLAM results.
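The role of the 20.00 cm baseline can be made concrete with the standard stereo depth relations, stated here as background rather than as part of the paper: for focal length f (in pixels), baseline B, and disparity d,

    Z = f·B/d,    σ_Z ≈ (Z²/(f·B))·σ_d,

so depth uncertainty grows quadratically with range. As an illustration with an assumed focal length of f = 700 px and the reported B = 0.20 m, a ±0.5 px disparity error at Z = 10 m corresponds to roughly ±0.36 m of depth error, which is why a small baseline bounds the achievable stereo precision at range.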

Highlights

  • Simultaneous localization and mapping (SLAM) is the procedure of building a map of the surrounding environment of a vehicle/rover and using the computed map to determine the vehicle/rover location

  • Both monocular and stereo SLAM data have been captured with the same visual sensors, which in the stereo case had a baseline of 20.00 cm

  • The indoor results of the monocular SLAM are more precise than the outdoor results


Introduction

Simultaneous localization and mapping (SLAM) is the procedure of building a map of the surrounding environment of a vehicle/rover and using the computed map to determine the vehicle/rover location. SLAM research challenges are centered on methods enabling large-scale implementations in increasingly unstructured environments, especially in GPS-denied environments. Visual-depth SLAM [4] [5] [6] utilizes a depth sensor integrated with a visual sensor, such as the Microsoft Kinect, Intel RealSense, and Xtion Pro. Visual-depth sensors usually have a limited scan angle and range, which makes the SLAM computation challenging for large environments [7] [8] [9] [10] [11]. Laser-based SLAM can be computed in a large number of environments at different scales due to the longer laser range. SLAM can be computed using three paradigms, namely Kalman filters, particle filters, and graph-based optimization [10] [11] [12] [13].
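As background on the graph-based paradigm mentioned above, the following is a minimal one-dimensional pose-graph sketch, not the paper's implementation: poses are estimated from relative constraints by linear least squares, and a (deliberately exaggerated) loop-closure constraint redistributes the accumulated drift around the loop, which is the trajectory deformation tested by the loop samples in this work. All numbers and the equal weighting are illustrative assumptions.

    # Minimal 1-D pose-graph SLAM sketch (illustrative, equal-weight constraints).
    import numpy as np

    # Relative constraints (i, j, z): a measurement of x_j - x_i.
    # Three odometry steps of ~1 m each, plus one loop closure saying
    # that pose 3 has returned to pose 0 (total inconsistency: ~3 m).
    constraints = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (0, 3, 0.0)]
    n_poses = 4

    # Each constraint gives one linear equation x_j - x_i = z; one extra
    # row anchors pose 0 at the origin to fix the gauge freedom.
    A = np.zeros((len(constraints) + 1, n_poses))
    b = np.zeros(len(constraints) + 1)
    for row, (i, j, z) in enumerate(constraints):
        A[row, i], A[row, j], b[row] = -1.0, 1.0, z
    A[-1, 0] = 1.0

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(x)  # ≈ [0.0, 0.25, 0.6, 0.75]: the 3 m cycle error is spread
              # equally (0.75 m per edge) instead of accumulating as drift.

Kalman-filter and particle-filter SLAM instead process measurements sequentially, maintaining a single Gaussian or a sampled belief over the current pose; the graph-based formulation keeps the whole trajectory and re-optimizes it when a loop closure arrives, as above.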
