Abstract

In robotic applications, a key requirement for safe and efficient motion planning is the ability to map obstacle-free space in unknown, cluttered 3D environments. However, commodity-grade RGB-D cameras commonly used for sensing fail to register valid depth values on shiny, glossy, bright, or distant surfaces, leading to missing data in the map. To address this issue, we propose a framework that leverages probabilistic depth completion as an additional input for spatial mapping. We introduce a deep learning architecture that provides uncertainty estimates for the depth completion of RGB-D images. Our pipeline exploits the inferred missing depth values and depth uncertainty to complement raw depth images and improve the speed and quality of free space mapping. Evaluations on synthetic data show that our approach maps significantly more correct free space with relatively low error than using raw data alone across different indoor environments, thereby producing more complete maps that can be directly used for robotic navigation tasks. The performance of our framework is validated using real-world data.
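The abstract does not spell out how the predicted depth and its uncertainty are combined with the raw image, so the following is only a minimal sketch of the idea, assuming a zero-depth convention for invalid pixels and a simple uncertainty threshold; the function name fuse_depth and the max_std parameter are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fuse_depth(raw_depth, pred_depth, pred_std, max_std=0.25):
    """Fill invalid raw-depth pixels with network predictions whose
    uncertainty (standard deviation, in metres) is below max_std.

    raw_depth  -- HxW array; 0 marks pixels the sensor failed to register
    pred_depth -- HxW array of completed depth from the network
    pred_std   -- HxW array of per-pixel predictive standard deviation
    """
    fused = raw_depth.copy()
    missing = raw_depth <= 0.0       # holes in the raw image
    confident = pred_std < max_std   # predictions we trust enough to use
    fill = missing & confident
    fused[fill] = pred_depth[fill]
    return fused

# Toy example: a 2x2 image with one missing pixel.
raw = np.array([[1.2, 0.0], [2.0, 3.1]])
pred = np.array([[1.2, 1.5], [2.0, 3.0]])
std = np.array([[0.05, 0.10], [0.05, 0.40]])
print(fuse_depth(raw, pred, std))  # the 0.0 pixel becomes 1.5
```

Thresholding on uncertainty rather than copying all predictions is what lets confidently inferred pixels contribute to the map while highly uncertain ones are left as unknown.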

Highlights

  • In recent years, depth sensors have become a core component in a variety of robotic applications, including scene reconstruction, exploration, and inspection

  • Our work focuses on the task of guided depth completion, where the goal is to predict a dense depth value at every pixel based on the raw depth and a paired colour image (see the sketch after this list)

  • While our approach is applicable to any Simultaneous Localisation and Mapping (SLAM) scenario, in this paper we focus only on mapping, to show improvements for free space mapping in unknown environments
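The highlights describe the guided depth completion task but not the network itself, so here is a minimal, hypothetical PyTorch sketch of the input/output contract only: RGB and raw depth are concatenated as guidance plus sparse measurements, and the network returns a completed depth map together with a per-pixel log-variance as the uncertainty estimate. TinyCompletionNet and its layer sizes are illustrative stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyCompletionNet(nn.Module):
    """Toy guided depth completion: RGB (3 ch) + raw depth (1 ch) in,
    completed depth and per-pixel log-variance out. A real network
    would use an encoder-decoder; this only shows the I/O shape."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=3, padding=1),  # [depth, log-variance]
        )

    def forward(self, rgb, raw_depth):
        x = torch.cat([rgb, raw_depth], dim=1)  # colour guidance + sparse depth
        out = self.net(x)
        depth = torch.relu(out[:, :1])          # depth is non-negative
        log_var = out[:, 1:]                    # uncertainty head
        return depth, log_var

# One forward pass on a random 480x640 frame.
model = TinyCompletionNet()
rgb = torch.rand(1, 3, 480, 640)
raw = torch.rand(1, 1, 480, 640)
depth, log_var = model(rgb, raw)
print(depth.shape, log_var.shape)  # both torch.Size([1, 1, 480, 640])
```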


Summary

Introduction

Depth sensors have become a core component in a variety of robotic applications, including scene reconstruction, exploration, and inspection. Commodity-grade RGB-D cameras, such as the Microsoft Kinect and Intel RealSense, suffer from limited range and produce images with noise and missing data when viewing surfaces that are shiny, glossy, bright, or too far away. In robotic scenarios, this may lead to inefficient and inaccurate mapping performance when only the raw sensor data is used. Our goal is to create more complete spatial maps of cluttered 3D environments for robotic navigation purposes. This is achieved by filling in the holes found in the raw depth images that are used for mapping.
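This excerpt does not specify the map representation, so the following is a simplified sketch of how a completed depth image can carve out free space: each back-projected depth point defines a ray from the camera, and voxels along the ray are marked free while the endpoint is marked occupied. The discrete three-state grid, mark_free_space, and voxel_size are illustrative assumptions; a practical system would typically use a probabilistic (e.g. log-odds) occupancy map instead.

```python
import numpy as np

def mark_free_space(grid, origin, points, voxel_size=0.1, n_samples=50):
    """Mark voxels as free along rays from the camera origin to each
    measured (or completed) 3-D point, then mark the endpoint occupied.

    grid   -- 3-D int array: 0 unknown, 1 free, 2 occupied
    origin -- (3,) camera position in world coordinates (metres)
    points -- (N, 3) back-projected depth points in world coordinates
    """
    for p in points:
        # Sample positions along the ray, stopping just short of the surface.
        for t in np.linspace(0.0, 1.0, n_samples, endpoint=False):
            v = ((origin + t * (p - origin)) / voxel_size).astype(int)
            if np.all(v >= 0) and np.all(v < grid.shape):
                grid[tuple(v)] = max(grid[tuple(v)], 1)  # free, unless occupied
        v_end = (p / voxel_size).astype(int)
        if np.all(v_end >= 0) and np.all(v_end < grid.shape):
            grid[tuple(v_end)] = 2                       # surface hit
    return grid

# Usage: two rays carve free space through a 5 m cube of 10 cm voxels.
grid = np.zeros((50, 50, 50), dtype=int)
origin = np.array([2.5, 2.5, 2.5])
points = np.array([[4.0, 2.5, 2.5], [2.5, 4.5, 2.5]])
mark_free_space(grid, origin, points)
print((grid == 1).sum(), (grid == 2).sum())  # free vs. occupied voxel counts
```

Filling depth holes before this step matters because a missing pixel contributes no ray at all, leaving the corresponding region of the map unknown rather than free.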

