Abstract

Current methods require robots to be reprogrammed for every new task, consuming substantial engineering effort. This work focuses on integrating real and simulated environments for our proposed “Internet of Skills,” which enables robots to learn advanced skills from a small set of expert demonstrations. By building on recent work in Learning from Demonstrations (LfD) and Reinforcement Learning (RL), we train robot control policies that not only complete a given task effectively but also outperform the expert demonstrations used to train them. In this work, we create simulated environments to train RL algorithms for the tasks of inverse kinematics and obstacle avoidance. We compare several state-of-the-art RL algorithms and provide a detailed analysis of the chosen state space and parameters. Lastly, we use a Vicon motion tracking system and train the robot agent to follow trajectories given by a human operator. Our results show that RL algorithms such as Proximal Policy Optimization (PPO) can produce control policies capable of complex control tasks that integrate with the real world, an important first step toward a system that can autonomously learn new skills from human demonstrations.
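As a rough illustration of the kind of training setup the abstract describes (not the paper's actual code), the sketch below trains a Stable-Baselines3 PPO agent on a toy 2-DOF planar-arm reaching task, a minimal stand-in for the inverse-kinematics problem. The environment class `TwoLinkReachEnv`, its link lengths, the dense distance reward, and the training budget are all illustrative assumptions.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class TwoLinkReachEnv(gym.Env):
    """Toy 2-DOF planar arm: the policy must drive the end effector
    to a randomly placed goal (an illustrative stand-in for the
    inverse-kinematics task; not the paper's environment)."""

    def __init__(self, link_lengths=(1.0, 1.0), max_steps=100):
        self.l1, self.l2 = link_lengths
        self.max_steps = max_steps
        # Observation: two joint angles + 2-D goal position.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)
        # Action: small joint-velocity commands for both joints.
        self.action_space = spaces.Box(-0.1, 0.1, shape=(2,), dtype=np.float32)

    def _end_effector(self):
        # Forward kinematics of the two-link arm.
        q1, q2 = self.q
        x = self.l1 * np.cos(q1) + self.l2 * np.cos(q1 + q2)
        y = self.l1 * np.sin(q1) + self.l2 * np.sin(q1 + q2)
        return np.array([x, y])

    def _obs(self):
        return np.concatenate([self.q, self.goal]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.q = self.np_random.uniform(-np.pi, np.pi, size=2)
        # Sample a reachable goal inside the arm's workspace.
        r = self.np_random.uniform(0.2, self.l1 + self.l2)
        th = self.np_random.uniform(-np.pi, np.pi)
        self.goal = np.array([r * np.cos(th), r * np.sin(th)])
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        self.q = self.q + action  # integrate the joint-velocity command
        self.steps += 1
        dist = np.linalg.norm(self._end_effector() - self.goal)
        reward = -dist                   # dense penalty on goal distance
        terminated = bool(dist < 0.05)   # close enough to the goal
        truncated = self.steps >= self.max_steps
        return self._obs(), reward, terminated, truncated, {}


env = TwoLinkReachEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)  # short budget; real training runs far longer
```

The same pattern extends naturally to the other tasks mentioned above: obstacle avoidance would add obstacle positions to the observation and a collision penalty to the reward, and trajectory following would replace the fixed goal with a time-varying target supplied by the motion-capture stream.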
