Abstract

In robotics, the ultimate goal of reinforcement learning is to endow robots with the ability to learn, improve, adapt and reproduce tasks with dynamically changing constraints based on exploration and autonomous learning. We give a summary of the state-of-the-art of reinforcement learning in the context of robotics, in terms of both algorithms and policy representations. Numerous challenges faced by the policy representation in robotics are identified. Three recent examples of the application of reinforcement learning to real-world robots are described: a pancake flipping task, a bipedal walking energy minimization task and an archery-based aiming task. In all examples, a state-of-the-art expectation-maximization-based reinforcement learning algorithm is used, and different policy representations are proposed and evaluated for each task. The proposed policy representations offer viable solutions to six rarely-addressed challenges in policy representations: correlations, adaptability, multi-resolution, globality, multi-dimensionality and convergence. Both the successes and the practical difficulties encountered in these examples are discussed. Based on insights from these particular cases, conclusions are drawn about the state-of-the-art and future directions for reinforcement learning in robotics.

Highlights

  • Endowing robots with human-like abilities to perform motor skills in a smooth and natural way is one of the important goals of robotics

  • We propose an approach that allows us to change the complexity of the policy representation dynamically, while the reinforcement learning is running, without losing any of the collected data and without having to restart the learning

  • We show that reinforcement learning (RL) in combination with regression yields an extremely fast-converging algorithm, and we demonstrate it in practice using the iCub humanoid robot to quickly learn the skill of archery
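The second highlight describes changing the complexity of the policy representation during learning without discarding collected data. A minimal sketch of one plausible realization, assuming a policy encoded as a weighted sum of normalized radial basis functions: when the parameterization grows, the new weights are re-fit by least-squares regression so that the encoded trajectory is preserved (all function names and constants here are illustrative, not from the paper):

```python
import numpy as np

def rbf_features(t, n_basis):
    """Evaluate n_basis Gaussian radial basis functions, evenly spaced on [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    width = 1.0 / (n_basis ** 2)
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)  # normalized activations

def grow_policy(weights, n_new, n_samples=200):
    """Re-fit a finer parameterization so the encoded trajectory is preserved.

    The old policy (len(weights) basis functions) is sampled densely, and the
    new, larger weight vector is obtained by least-squares regression, so
    learning can continue without restarting or losing the learned behavior.
    """
    t = np.linspace(0.0, 1.0, n_samples)
    y_old = rbf_features(t, len(weights)) @ weights      # trajectory of the old policy
    phi_new = rbf_features(t, n_new)
    w_new, *_ = np.linalg.lstsq(phi_new, y_old, rcond=None)
    return w_new

# Example: a coarse 5-basis policy upgraded to 10 basis functions
w5 = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
w10 = grow_policy(w5, 10)
t = np.linspace(0.0, 1.0, 50)
err = np.max(np.abs(rbf_features(t, 5) @ w5 - rbf_features(t, 10) @ w10))
```

The key design choice is that the resolution change is a pure re-parameterization: the robot's behavior is (approximately) unchanged at the moment of the switch, so reward statistics gathered under the old representation remain meaningful.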


Summary

Introduction

Endowing robots with human-like abilities to perform motor skills in a smooth and natural way is one of the important goals of robotics. The approaches for teaching new skills to robots have evolved significantly, and currently there are three well-established types of approaches: direct programming, imitation learning and reinforcement learning. Teleoperation, the process of remotely controlling the robot's body using another input device such as a joystick or a haptic device, shares many similarities with kinesthetic teaching in terms of advantages and disadvantages. Reinforcement learning, in contrast, can work by only specifying the goal that the robot must achieve, without evaluating the intermediate progress.
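The abstract mentions an expectation-maximization-based reinforcement learning algorithm. A minimal sketch of the general idea behind such reward-weighted (EM-style) policy search, in the spirit of algorithms like PoWER, on a toy problem (the reward function, hyperparameters and elite-selection rule below are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(theta, target):
    """Toy episodic reward: highest when the policy parameters hit the target."""
    return np.exp(-np.sum((theta - target) ** 2))

def em_policy_search(theta, target, n_iters=50, n_rollouts=10, sigma=0.2, n_best=4):
    """Reward-weighted (EM-style) policy search sketch.

    Each iteration perturbs the parameters with Gaussian exploration noise,
    then moves toward the reward-weighted average of the best rollouts; no
    gradient of the reward is ever needed.
    """
    for _ in range(n_iters):
        eps = sigma * rng.standard_normal((n_rollouts, theta.size))
        rewards = np.array([reward(theta + e, target) for e in eps])
        best = np.argsort(rewards)[-n_best:]      # keep only the top rollouts
        w = rewards[best]
        theta = theta + (w[:, None] * eps[best]).sum(axis=0) / w.sum()
    return theta

target = np.array([0.7, -0.3])
theta = em_policy_search(np.zeros(2), target)
```

Because the update is a weighted average of tried perturbations, it needs no learning rate and never steps outside the explored region, which is one reason this family of methods is attractive for physical robots.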

State-of-the-Art Reinforcement Learning Algorithms in Robotics
Challenges for the Policy Representation in Robotics
State-of-the-Art Policy Representations in Robotics
Example A
Task Description
Proposed Compact Encoding with Coupling
Experiment
Experimental Results
Example B
Energy Minimization Problem
Problems with Fixed Policy Parameterization
Evolving Policy Parameterization
Evaluation of Evolving Policy Parameterization
Bipedal Walking Experiment
Example C
Description of the Archery Task
Learning Algorithm 1
Learning Algorithm 2
Image Processing Algorithm
Experimental Results on the iCub Robot
Summary and Comparison of the Robot Learning Tasks
The Future of Reinforcement Learning in Robotics
Beyond Reinforcement Learning
Conclusions

