Abstract

The development of simultaneous localization and mapping (SLAM) technology plays an important role in robot navigation and autonomous vehicle innovation. ORB-SLAM2 is a unified SLAM solution for monocular, stereo (binocular), and RGB-D cameras that constructs a sparse feature-point map for real-time positioning. However, a sparse-map-based approach cannot effectively meet the requirements of robot navigation, environment reconstruction, and other tasks. In this paper, a dense mapping thread is added to the existing ORB-SLAM2 system. The depth map obtained by stereo matching of the binocular images, together with the corresponding color image, is used to generate a three-dimensional point cloud for each keyframe; the point clouds are then fused using the tracked and optimized keyframe poses to obtain a real-time point cloud map. Experiments on the KITTI dataset and in a real environment under ROS show that the proposed system constructs a clear three-dimensional point cloud map while producing an accurate trajectory.
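The per-keyframe step described above, back-projecting a depth map and color image into a 3-D point cloud and fusing the clouds with the optimized keyframe poses, can be sketched as follows. This is a minimal NumPy illustration of the standard pinhole back-projection, not the paper's implementation; the function names, the 4x4 camera-to-world pose convention, and the intrinsics matrix `K` are assumptions for the sketch.

```python
import numpy as np

def backproject(depth, color, K):
    """Back-project a depth map and a color image into a 3-D point cloud
    in the camera frame, using pinhole intrinsics K (fx, fy, cx, cy)."""
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy          # Y = (v - cy) * Z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    cols = color.reshape(-1, 3)
    valid = pts[:, 2] > 0          # drop pixels with no stereo depth
    return pts[valid], cols[valid]

def fuse(clouds, poses):
    """Transform each keyframe cloud by its optimized camera-to-world
    pose T (4x4, [R | t]) and concatenate into one global map."""
    fused_pts, fused_cols = [], []
    for (pts, cols), T in zip(clouds, poses):
        R, t = T[:3, :3], T[:3, 3]
        fused_pts.append(pts @ R.T + t)   # p_world = R p_cam + t
        fused_cols.append(cols)
    return np.vstack(fused_pts), np.vstack(fused_cols)
```

In the full system a voxel-grid or statistical-outlier filter would normally be applied after fusion to keep the map size bounded; the sketch omits that post-processing.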

Highlights

  • Simultaneous localization and mapping (SLAM) approaches receive information through a specific sensor, without prior information, in order to estimate an object’s position and motion as well as to establish an environmental model [1] [2]

  • We propose an improved hybrid SLAM method that combines feature-based visual odometry (VO) and dense mapping by depth estimation

  • We review related work in the two fields integrated within our framework, i.e., methods for visual SLAM and depth estimation with stereo cameras

Summary

INTRODUCTION

Simultaneous localization and mapping (SLAM) approaches receive information through a specific sensor, without prior information, in order to estimate an object’s position and motion as well as to establish an environmental model [1] [2]. Visual odometry (VO, the front end) estimates motion from images and can be classified into feature-based and direct methods [4]. The back end receives the camera positions and poses measured by VO, together with loop detection information; these data are optimized to obtain a globally consistent trajectory and map. In feature-based methods, the camera motion and scene geometry are computed using epipolar geometry. This approach has an important inherent limitation: only feature information can be used in VO and mapping. We propose an improved hybrid SLAM method that combines feature-based VO and dense mapping by depth estimation.
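The epipolar-geometry relation underlying feature-based VO can be illustrated with a small NumPy sketch: for a relative pose (R, t) between two views, the essential matrix E = [t]ₓR satisfies x₂ᵀ E x₁ = 0 for corresponding normalized image points. The synthetic scene, the pose convention X₂ = R X₁ + t, and the function names below are assumptions for illustration; in a real pipeline such as ORB-SLAM2, E is instead estimated from ORB feature matches with RANSAC and then decomposed to recover (R, t).

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential(R, t):
    """Essential matrix E = [t]_x R; x2^T E x1 = 0 for matched points."""
    return skew(t) @ R

# Synthetic scene: 3-D points in front of camera 1.
rng = np.random.default_rng(0)
X1 = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))

# Relative pose of camera 2: small rotation about the y-axis plus a baseline.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.5, 0.0, 0.0])

X2 = X1 @ R.T + t                 # points expressed in camera-2 frame
x1 = X1 / X1[:, 2:3]              # normalized image coordinates, view 1
x2 = X2 / X2[:, 2:3]              # normalized image coordinates, view 2

# Epipolar constraint residuals x2^T E x1, one per correspondence.
residuals = np.einsum('ni,ij,nj->n', x2, essential(R, t), x1)
```

With noise-free correspondences the residuals are zero to machine precision; real feature matches are noisy, which is why the estimation is wrapped in RANSAC.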

RELATED WORK
POINT CLOUD POST-PROCESSING
IMPLEMENTATION AND EXPERIMENT
Findings
CONCLUSIONS