Abstract

We propose a general self-supervised learning approach for spatial perception tasks, such as estimating the pose of an object relative to the robot, from onboard sensor readings. The model is learned from training episodes by relying on two sources: a continuous state estimate, possibly inaccurate and affected by odometry drift; and a detector that sporadically provides supervision about the target pose. We demonstrate the general approach in three different concrete scenarios: a simulated robot arm that visually estimates the pose of an object of interest; a small differential drive robot using 7 infrared sensors to localize a nearby wall; and an omnidirectional mobile robot that localizes itself in an environment from camera images. Quantitative results show that the approach works well in all three scenarios, and that explicitly accounting for uncertainty yields statistically significant performance improvements.

Highlights

  • Many robot perception tasks consist of interpreting sensor readings to extract high-level spatial information [1], such as the pose of an object of interest (OOI) with respect to the robot, or the pose of the robot itself in the environment

  • We demonstrate the generality of our approach by solving common tasks in the robotics field: OOI pose estimation with a robotic arm, and localization of mobile ground robots

  • The coefficient of determination is a dimensionless measure of the quality of a regressor, which quantifies the fraction of variance in the target variable explained by the model
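As a concrete illustration of the metric named in the last highlight, the sketch below computes the coefficient of determination from its standard definition, R² = 1 − SS_res / SS_tot. This is not code from the paper; the function name and example values are illustrative only.

```python
# Illustrative sketch (not from the paper): the coefficient of
# determination R^2 = 1 - SS_res / SS_tot, i.e. the fraction of
# variance in the target variable explained by the regressor.

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    # Total sum of squares: variance of the targets around their mean.
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    # Residual sum of squares: squared prediction errors.
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

# A perfect regressor scores 1; always predicting the mean scores 0.
print(r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```

Being dimensionless, R² lets the paper compare regressors across the three scenarios even though the target poses are expressed in different units.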



Introduction

Many robot perception tasks consist of interpreting sensor readings to extract high-level spatial information [1], such as the pose of an object of interest (OOI) with respect to the robot, or the pose of the robot itself in the environment. In many real-world scenarios, collecting the necessary training sets is a fundamental problem. Self-supervised learning (SSL) in robotics aims at equipping robots with the ability to acquire their own training data, e.g. by using additional sensors as a source of supervision, without any external assistance. In some cases, this allows robots to acquire training data directly in the deployment environment.
