Abstract

We consider the problem of autonomous acquisition of manipulation skills where problem-solving strategies are initially available only for a narrow range of situations. We propose to extend the range of solvable situations by autonomous play with the object. By applying previously trained skills and behaviors, the robot learns how to prepare situations for which a successful strategy is already known. The information gathered during autonomous play is additionally used to train an environment model. This model is exploited for active learning and for generating novel compositions of preparatory behaviors. We apply our approach to a wide range of different manipulation tasks, e.g., book grasping, grasping of objects of different sizes by selecting different grasping strategies, placement on shelves, and tower disassembly. We show that the composite behavior generation mechanism enables the robot to solve previously unsolvable tasks, e.g., tower disassembly. We use success statistics gained during real-world experiments to simulate the convergence behavior of our system. Simulation experiments show that active learning improves learning speed by around 30%.
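To make the active-learning idea concrete, the following is a minimal sketch of how a robot might pick which preparatory behavior to practice next, based on the uncertainty of its success estimates. This is an illustration under our own assumptions, not the authors' implementation; the behavior names and the Beta-posterior uncertainty criterion are hypothetical.

```python
def uncertainty(stats, behavior):
    """Variance of the Beta posterior over the success probability,
    given observed (successes, failures) counts, as an uncertainty proxy."""
    s, f = stats[behavior]
    n = s + f + 2  # Beta(1, 1) prior adds one pseudo-success and one pseudo-failure
    mean = (s + 1) / n
    return mean * (1 - mean) / (n + 1)

def select_behavior(stats):
    """Active learning: practice the behavior whose success estimate
    is most uncertain, i.e., where a trial is most informative."""
    return max(stats, key=lambda b: uncertainty(stats, b))

# Hypothetical play statistics: (successes, failures) per preparatory behavior.
stats = {"push_to_edge": (10, 10), "flip_object": (1, 0)}
print(select_behavior(stats))  # the rarely tried behavior is selected
```

Under this criterion, rarely tried behaviors have high posterior variance and are practiced first, which is one way such a system could concentrate trials where the environment model is least certain.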

Highlights

  • Humans perform complex object manipulations so effortlessly that at first sight it is hard to believe that this problem is still unsolved in modern robotics

  • With Skill Learning by Autonomous Robotic Playing, we introduce a novel approach for autonomous learning that makes it easy to embed state-of-the-art research on specific manipulation problems

  • We introduce a novel way of combining model-free and model-based reinforcement learning methods for autonomous skill acquisition

Introduction

Humans perform complex object manipulations so effortlessly that at first sight it is hard to believe that this problem is still unsolved in modern robotics. This becomes less surprising if one considers how many different abilities are involved in human object manipulation. These abilities span from control (e.g., moving arms and fingers, balancing the body), via perception (e.g., vision, haptic feedback), to the planning of complex tasks. Most of these are not yet solved in research by themselves, not to speak of combining them in order to design systems that can stand up to a comparison with humans. In order to take a step toward human-like robots
