Abstract

As is well known, some versions of the Pepper robot provide poor depth perception due to the lenses placed in front of its three-dimensional sensor. In this paper, we present a method to improve that faulty 3D perception. Our proposal combines the actual depth readings of Pepper with a deep learning-based monocular depth estimation. As shown, the combination of the two provides a better 3D representation of the scene. In previous work we presented an initial approximation of this fusion technique, but it had some drawbacks. In this paper we analyze the pros and cons of the Pepper readings, the monocular depth estimation method and our previous fusion method. Finally, we demonstrate that the proposed fusion method outperforms them all.
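As a rough illustration of the idea, the following Python sketch shows one way such a fusion could work: the dense but relative monocular estimate is aligned to the sensor's metric scale using the pixels where the Pepper reading is valid, and then used to fill in the missing pixels. The function name, array layout and least-squares alignment are our own assumptions for illustration, not necessarily the exact pipeline described in the paper.

# Minimal sketch of a depth-fusion step. Assumes the Pepper depth map is a
# NumPy array with zeros at invalid pixels and that the monocular network
# returns a dense relative depth map of the same resolution.
import numpy as np

def fuse_depth(pepper_depth: np.ndarray, mono_depth: np.ndarray) -> np.ndarray:
    """Fill invalid sensor pixels with a scale/shift-aligned monocular estimate."""
    valid = pepper_depth > 0  # sensor pixels with a usable reading

    # Align the relative monocular depth to the sensor's metric scale by
    # solving  mono * s + t ~= pepper  on the valid pixels (least squares).
    A = np.stack([mono_depth[valid], np.ones(valid.sum())], axis=1)
    s, t = np.linalg.lstsq(A, pepper_depth[valid], rcond=None)[0]

    # Keep the real sensor readings where available; fill holes with the
    # aligned monocular estimate elsewhere.
    fused = pepper_depth.copy()
    fused[~valid] = s * mono_depth[~valid] + t
    return fused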

Highlights

  • In recent years, interest in humanoid and social robotics has grown steadily

  • We propose a method to improve the point clouds provided by the Pepper robot v1.8a, although it can also be used to enhance other point cloud-based cameras

  • This paper aims to overcome several existing three-dimensional sensor weaknesses by merging point clouds provided by a real depth camera with an estimated depth map from a deep learning approach


Summary

INTRODUCTION

Interest in humanoid and social robotics has grown steadily. This expectation has been fueled by recent advances in materials, devices and artificial intelligence. The Pepper robot is intended to be deployed in indoor environments and has a clear social appeal. The robot is equipped with a range of different sensors, including color cameras, a laser, an ultrasonic sensor, touch surfaces and bumper switches. It is used for a variety of research purposes, including object recognition using 3D data, such as [1], or SLAM, such as [2], [3] or [4]. Implementing these methods on it would be a hard task because of its faulty depth camera. In addition to that specific issue, there are a variety of other problems that affect all time-of-flight and structured-light cameras: these sensors provide low-density point clouds or fail on specular surfaces.

RELATED WORKS
FUSIONV2
EXPERIMENTATION
CONCLUSION