Abstract

Robots have achieved great success in assembly tasks in known and structured environments. However, assembly in unknown and unstructured environments remains an open problem due to the limited adaptability and robustness of conventional methods and algorithms. Reinforcement learning is a potential solution to this problem: because it can learn a task purely through interaction with the environment, without prior knowledge or information, it can adapt to new tasks. However, reinforcement learning is sample inefficient: it requires a large amount of interaction data to learn a given task properly, and changes in the task may require learning again from scratch. Meta reinforcement learning can be both adaptable and data efficient at test time when facing a new task, as it learns a fast adaptation procedure using prior knowledge gained through training on a variety of tasks. In this paper, we show how meta reinforcement learning can be used to successfully perform peg-in-hole assembly tasks in unknown environments, with high uncertainty in the hole position and low-accuracy sensors, within a small number of interaction trajectories with the environment.
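The fast adaptation procedure mentioned above can be illustrated with a minimal, first-order MAML-style sketch on a toy scalar objective. This is not the paper's method; the task family, losses, and learning rates below are hypothetical stand-ins chosen only to show the inner (adaptation) and outer (meta-training) loops:

```python
import random

def loss(theta, target):
    # Per-task loss: squared distance to the task's target parameter.
    return (theta - target) ** 2

def grad(theta, target):
    # Analytic gradient of the squared loss.
    return 2.0 * (theta - target)

def adapt(theta, target, alpha=0.1, steps=3):
    # Inner loop: a few gradient steps on one task -- the "fast adaptation"
    # performed at test time with little interaction data.
    for _ in range(steps):
        theta = theta - alpha * grad(theta, target)
    return theta

def meta_train(tasks, theta=0.0, beta=0.05, epochs=200):
    # Outer loop (first-order approximation): move the shared initialization
    # so that the post-adaptation loss across training tasks stays small.
    for _ in range(epochs):
        task = random.choice(tasks)
        adapted = adapt(theta, task)
        theta = theta - beta * grad(adapted, task)
    return theta

random.seed(0)
training_tasks = [-1.0, 0.0, 1.0]   # hypothetical family of training tasks
theta0 = meta_train(training_tasks)
new_task = 0.5                       # unseen task at test time
fast = adapt(theta0, new_task)       # few-shot adaptation from meta-learned init
```

Meta-training produces an initialization from which only a few inner-loop steps are needed on a new task, which is the sample-efficiency property the abstract appeals to.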
