Abstract

In existing sparse Red Green Blue-Depth (RGB-D) Simultaneous Localization and Mapping (SLAM) algorithms, the pairwise spatial transformation is computed by matching features extracted at detected key points, and its accuracy and robustness suffer from image noise, image blur, and inconsistency between the depth data and the color image. Considering that most indoor environments follow the Manhattan World assumption and that the Manhattan Frame can serve as a reference for computing the pairwise spatial transformation, a new RGB-D SLAM algorithm is proposed. It first performs Manhattan Frame Estimation (MFE) using the introduced concept of orientation relevance. The pairwise spatial transformation between two RGB-D frames is then computed from the estimated Manhattan Frames. Finally, MFE using orientation relevance is incorporated into RGB-D SLAM to improve its performance. Experimental results show that the proposed RGB-D SLAM algorithm yields clear improvements in accuracy, robustness, and runtime.
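Because the Manhattan Frame is fixed in the scene, the relative rotation between two camera poses follows directly from the MF rotation estimated in each frame. The sketch below illustrates this general principle only (it is not the paper's exact method; the function name and the MF-to-camera rotation convention are assumptions):

```python
import numpy as np

def pairwise_rotation_from_mf(R_mf1, R_mf2):
    """Illustrative sketch: if R_mf1 and R_mf2 are the rotations mapping the
    shared Manhattan Frame's axes into camera frames 1 and 2, the relative
    camera rotation follows directly, since the MF itself is fixed in the
    indoor scene."""
    # A point p1 in camera-1 coordinates maps to MF coordinates as R_mf1.T @ p1,
    # and then into camera-2 coordinates as R_mf2 @ (R_mf1.T @ p1), so:
    return R_mf2 @ R_mf1.T

# Toy example: camera 2 is camera 1 rotated 90 degrees about the vertical axis.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
R_mf1 = np.eye(3)       # MF aligned with camera 1
R_mf2 = Rz @ R_mf1      # same MF seen from the rotated camera 2
R_12 = pairwise_rotation_from_mf(R_mf1, R_mf2)
assert np.allclose(R_12, Rz)
```

This is why a fixed MF is attractive as a reference: the relative rotation no longer depends on matching individual key points between the two frames.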

Highlights

  • Simultaneous Localization and Mapping (SLAM), which aims to acquire the structure of an unknown environment and at the same time estimate the sensor pose with respect to this structure, is an essential task for the autonomy of a robot

  • Considering that Red Green Blue-Depth (RGB-D) SLAM is only applicable indoors and that the Manhattan Frame (MF) of an indoor scene is fixed, the MF can be used as a reference to compute the pairwise spatial transformation

  • In conventional RGB-D SLAM, the estimated trajectory is often broken into several fragments when feature matching of detected key points fails during pairwise spatial transformation computation, owing to image noise, image blur, and inconsistency between the depth data and the RGB image; this fragmentation increases the complexity of the back-end optimization problem


Summary

Introduction

Simultaneous Localization and Mapping (SLAM), which aims to acquire the structure of an unknown environment and at the same time estimate the sensor pose with respect to this structure, is an essential task for the autonomy of a robot. Kinect Fusion can obtain real-time depth measurements and a highly detailed voxel-based map simultaneously, but such algorithms are only suitable for small workspaces owing to high memory consumption. Dense SLAM algorithms enable good localization and mapping with high-quality scene representation [8,9], yet they are prone to failure in environments with poor structure and to drift over time. Sparse RGB-D SLAM algorithms typically run quickly because the sensor pose is estimated from sparse point features; such a lightweight implementation enables a wide range of applications. In this work, a new RGB-D SLAM algorithm is proposed by extending Manhattan Frame Estimation (MFE) using orientation relevance to RGB-D image sequences, targeting low-texture indoor environments.
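The paper's orientation-relevance formulation is not reproduced here, so the following is only a generic sketch of Manhattan Frame estimation from surface normals, using a standard alternating assign-and-refit (Kabsch/Procrustes) scheme; the function name and conventions are assumptions:

```python
import numpy as np

def estimate_manhattan_frame(normals, iters=10):
    """Generic MF-estimation sketch (not the paper's orientation-relevance
    method): alternately assign each surface normal to its nearest Manhattan
    axis, then re-fit the rotation by orthogonal Procrustes (Kabsch).
    `normals` is an (N, 3) array of unit normals; returns R such that each
    normal n is approximately R @ a for one of the six signed canonical axes a."""
    axes = np.vstack([np.eye(3), -np.eye(3)])  # +-x, +-y, +-z
    R = np.eye(3)
    for _ in range(iters):
        # Assign: for each normal, pick the rotated axis with the largest dot product.
        targets = axes[np.argmax(normals @ (axes @ R.T).T, axis=1)]
        # Refit: R maximizing sum_j n_j . (R a_j), with a reflection guard.
        U, _, Vt = np.linalg.svd(targets.T @ normals)
        d = np.linalg.det(Vt.T @ U.T)
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R

# Toy check: noiseless normals produced by a 10-degree yaw of the canonical axes.
t = np.deg2rad(10.0)
R_true = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0, 0.0, 1.0]])
normals = np.vstack([np.eye(3), -np.eye(3)]) @ R_true.T  # n_j = R_true @ a_j
R_est = estimate_manhattan_frame(normals)
assert np.allclose(R_est, R_true)
```

Note that any such scheme recovers the MF only up to the symmetry group of the cube; with real data, robust weighting of the normals (which the paper's orientation relevance addresses) matters far more than this toy example suggests.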

Method
Overview of the Original Method
Manhattan Frame Estimation Using Orientation Relevance
Computation of Pairwise Spatial Transformation with the MFE
Improved RGB-D SLAM
Experiments
Conclusions
