Abstract

Pose estimation and map reconstruction are fundamental requirements for autonomous robot behavior. In this paper, we propose a point–plane-based method that simultaneously estimates the robot's poses and reconstructs a map of the current environment using an RGB-D camera. First, we detect and track point and plane features from the color and depth images, obtaining reliable constraints even in low-texture scenes. Then, we construct cost functions from these features and minimize them, using a minimal plane representation, for pose estimation and local map optimization. Furthermore, for Manhattan World (MW) scenes, we extract the MW axes from the plane normals and the vanishing directions of parallel lines, and we add the MW constraint to the point–plane-based cost functions for more accurate pose estimation. Experiments on public RGB-D datasets demonstrate the robustness and accuracy of the proposed algorithm for pose estimation and map reconstruction and show its advantages over alternative methods.
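
To make the cost construction concrete, a standard point–plane formulation is sketched below; the paper's exact parametrization may differ, and the symbols ($\tau$, $\Pi$, $\Sigma$) are illustrative conventions rather than the authors' notation. A plane $\pi = (\mathbf{n}^\top, d)^\top$ with unit normal $\mathbf{n}$ and distance $d$ has only three degrees of freedom, so a minimal representation stacks the normal's spherical angles with the distance:

    $\tau(\pi) = (\phi, \psi, d), \qquad \mathbf{n} = (\cos\phi\cos\psi,\ \sin\phi\cos\psi,\ \sin\psi)^\top.$

Writing $T$ for the world-to-camera pose, a joint cost over tracked map points $\mathbf{P}_i$ (observed at pixels $\mathbf{u}_i$) and tracked map planes $\pi_j$ (observed in the current frame as $\hat\pi_j$) takes the form

    $E(T) = \sum_i \lVert \mathbf{u}_i - \Pi(T\,\mathbf{P}_i) \rVert^2_{\Sigma_i} + \sum_j \lVert \tau(T^{-\top}\pi_j) - \tau(\hat\pi_j) \rVert^2_{\Sigma_j},$

where $\Pi$ is the pinhole projection and $T^{-\top}$ maps a plane from the world frame into the camera frame. Minimizing $E$ over $T$ yields the pose; the same residuals over a window of keyframes give the local map optimization, and for MW scenes an additional term penalizing deviation of the rotated plane normals from the extracted MW axes can be appended in the same way.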

Highlights

  • This article extends a recent conference paper [1] that exploited plane features to estimate sensor poses in low-texture indoor environments

  • Our proposed system has two main parts: (1) for each newly captured frame, we detect and track point and plane features with respect to the local map, and we estimate the current pose by solving the cost function built from the tracked features; and (2) for each newly inserted keyframe, we update the local map, which consists of point–plane landmarks and keyframes, and we perform full bundle adjustment to obtain the global map if a loop is detected (a structural sketch of this pipeline follows the list)

  • We propose a point–plane-based method to estimate robot poses and reconstruct maps of indoor scenes using an RGB-D camera
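
The two-part pipeline above can be summarized in a structural sketch. Every identifier below (Frame, LocalMap, solve_pose, and so on) is hypothetical; the paper does not define a code interface, so this is only a minimal outline of the control flow, not the authors' implementation.

    from dataclasses import dataclass, field
    from typing import Any, Dict, List

    @dataclass
    class Frame:
        points: List[Any]        # point features from the color image
        planes: List[Any]        # plane features (n, d) from the depth image
        pose: Any = None         # estimated SE(3) camera pose

    @dataclass
    class LocalMap:
        keyframes: List[Frame] = field(default_factory=list)
        landmarks: Dict[int, Any] = field(default_factory=dict)  # point and plane landmarks

    def track_frame(frame: Frame, local_map: LocalMap, solve_pose) -> Frame:
        # Part 1: match the frame's point/plane features against the local map,
        # then minimize the joint point-plane cost to estimate the current pose.
        frame.pose = solve_pose(frame, local_map)
        return frame

    def update_map(frame: Frame, local_map: LocalMap,
                   is_keyframe, local_ba, detect_loop, full_ba) -> None:
        # Part 2: on each newly inserted keyframe, optimize the local map of
        # point-plane landmarks; run full bundle adjustment when a loop closes.
        if is_keyframe(frame, local_map):
            local_map.keyframes.append(frame)
            local_ba(local_map)
            if detect_loop(frame, local_map):
                full_ba(local_map)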

Introduction

This article extends a recent conference paper [1] that exploited plane features to estimate sensor poses in low-texture indoor environments. A robot's pose and a map of its scene can be obtained from robotic sensors such as wheel encoders, inertial measurement units [2,3,4], lasers [5,6], and cameras [7,8,9]. Among these solutions, vision-based methods are among the most effective because cameras conveniently capture informative images for estimating the robot's pose and perceiving its surroundings. Feature points, however, are generally absent in structural and low-texture scenes, which motivates the use of plane features.
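
Because point features are scarce in such scenes, planes extracted from the depth image become the main source of constraints. As a minimal illustration only (the paper does not specify its plane detector, and the function name, thresholds, and Open3D dependency below are assumptions), a dominant plane can be fit to a depth map with RANSAC:

    import numpy as np
    import open3d as o3d

    def dominant_plane(depth: np.ndarray, fx, fy, cx, cy, depth_scale=1000.0):
        # Back-project the depth image to a 3D point cloud.
        v, u = np.nonzero(depth)
        z = depth[v, u] / depth_scale
        pts = np.stack(((u - cx) * z / fx, (v - cy) * z / fy, z), axis=1)
        cloud = o3d.geometry.PointCloud()
        cloud.points = o3d.utility.Vector3dVector(pts)
        # RANSAC plane fit: returns the normal n, offset d (with n·p + d = 0),
        # and the indices of the inlier points supporting the plane.
        (a, b, c, d), inliers = cloud.segment_plane(
            distance_threshold=0.02, ransac_n=3, num_iterations=500)
        return np.array([a, b, c]), d, inliers

Repeating the fit on the remaining outlier points recovers further planes, each of which can serve as a tracked landmark in the cost functions above.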
