Abstract

We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects it sees. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not only faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed at arbitrary positions in the robot's workspace, even while the robot is moving its torso, head and eyes.
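The core idea described above, learning a direct mapping from the two cameras' pixel coordinates and the joint encoder readings to a 3D object position, without camera calibration or a kinematic model, can be sketched as a small supervised regression problem. The following is a minimal illustrative sketch, not the authors' implementation: it trains a tiny two-layer neural network on synthetic data that stands in for real iCub samples (input dimensions, network size, and the simulated sensor-to-position map are all assumptions).

```python
# Hypothetical sketch: learn (stereo pixel coords + joint angles) -> 3D position.
# Synthetic data stands in for real robot samples; all sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_samples(n):
    # 10 inputs: 4 normalized pixel coordinates (u,v per camera) + 6 joint angles.
    X = rng.uniform(-1.0, 1.0, size=(n, 10))
    # A smooth nonlinear "ground truth" map to a normalized 3D workspace position.
    W_true = rng.standard_normal((10, 3))
    y = np.tanh(X @ W_true)
    return X, y

X, y = simulate_samples(2000)

# Two-layer MLP (tanh hidden layer), trained by plain batch gradient descent on MSE.
H = 32
W1 = rng.standard_normal((10, H)) * 0.3; b1 = np.zeros(H)
W2 = rng.standard_normal((H, 3)) * 0.3;  b2 = np.zeros(3)

lr = 0.05
n = len(X)
for step in range(3000):
    h = np.tanh(X @ W1 + b1)        # hidden activations, shape (n, H)
    pred = h @ W2 + b2              # predicted 3D positions, shape (n, 3)
    err = pred - y
    loss = (err ** 2).mean()
    if step == 0:
        loss0 = loss                # remember the untrained error for comparison
    # Backpropagation of the mean-squared-error gradient.
    dpred = 2.0 * err / (n * 3)
    dW2 = h.T @ dpred; db2 = dpred.sum(axis=0)
    dh = (dpred @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ dh;    db1 = dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"MSE before training: {loss0:.4f}, after: {loss:.4f}")
```

In the paper's setting the training pairs would come from the robot itself (observed pixel/encoder readings paired with known object positions), which is what removes the need for explicit camera calibration or a kinematic model.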

Highlights

  • Today the majority of robots are still applied in industrial settings, where they are mainly used as programmable machines to solve automation tasks with pre-defined, pre-programmed actions in very structured environments

  • We find that Artificial Neural Networks (ANN) and Genetic Programming (GP) are not only faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures

  • Although perception for robotic systems has been investigated for a long time, e.g., [2,3,4], it remains a difficult issue to solve in robotic systems [5]


Summary

Introduction

Today the majority of robots are still applied in industrial settings, where they are mainly used as programmable machines to solve automation tasks with pre-defined, pre-programmed actions in very structured environments. The field has been moving towards extending the use of robotic systems into areas where they can co-exist with and help humans [1]. Proposed applications range from household tasks and helping in hospitals to elderly care and grocery shopping. A main hurdle is that the world humans live in is an inherently ‘unstructured’ and dynamic environment. A robot needs to be able to perceive and understand its surroundings, as the state of its workplace and the objects in it can no longer be known a priori. A spatial understanding, i.e., the ability to identify and localize objects autonomously and robustly with respect to itself, is therefore essential.

