Abstract

With the increasing complexity and individualization of processes and products, the manufacturing industry is under growing pressure to automate. To meet these constantly growing requirements on the flexibility of production and logistics systems, intelligent robotic systems that can adapt dynamically to changing process conditions are essential. Deep reinforcement learning provides a solution, offering a paradigm in which autonomous agents learn adaptive control strategies. However, the corresponding training procedure is based on trial-and-error interactions with an environment and is thus very data-inefficient and costly in real-world scenarios. Transfer learning approaches can mitigate these costs and improve learning by leveraging control capabilities acquired in other settings, such as simulations, or from previously learning a different task. In this paper, we investigate the transfer of trajectories between different robot models. We propose a methodology that maps trajectory demonstrations from a source robot into trajectories of a target robot by using forward and inverse kinematics models. We introduce a similarity measure that, on the one hand, compares different robot models using key poses along the kinematic chains and, on the other hand, captures the efficiency of transitions. This similarity measure is used to select from the mapped trajectories those which are feasible, efficient, and retain the general shape of the robotic arm. Finally, we perform behavioral cloning to pre-train a deep reinforcement learning policy for the target domain and demonstrate significant benefits compared to learning the target policy from scratch.
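To illustrate the core mapping step described above, the following is a minimal sketch of transferring a joint-space trajectory between two robots via forward and inverse kinematics. It assumes simple planar two-link arms with analytic FK/IK; the function names (`fk`, `ik`, `transfer`) and link parameters are illustrative and not taken from the paper, which targets full robot models.

```python
import math

def fk(q1, q2, l1, l2):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def ik(x, y, l1, l2):
    """Analytic inverse kinematics (elbow-down branch).

    Returns (q1, q2), or None if (x, y) is outside the arm's workspace.
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:          # unreachable pose -> infeasible waypoint
        return None
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def transfer(trajectory, src_links, tgt_links):
    """Map a source joint trajectory onto a target robot (FK on the source,
    IK on the target), keeping only the waypoints the target can reach."""
    mapped = []
    for q1, q2 in trajectory:
        pose = fk(q1, q2, *src_links)        # source joints -> Cartesian pose
        q = ik(*pose, *tgt_links)            # Cartesian pose -> target joints
        if q is not None:
            mapped.append(q)
    return mapped
```

In the paper's methodology, a mapped trajectory would additionally be scored by the similarity measure (key poses along the kinematic chain, transition efficiency) before being used for behavioral cloning; this sketch only covers the kinematic mapping and feasibility filtering.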
