Abstract

Efficient path planning methods for robot arms are crucial to ensure the quality and safety of their completing various tasks. Compared to traditional manual instruction, Reinforcement Learning (RL) based path planning methods show better adaptability to complex working scenarios. However, the training of RL is usually time-consuming with a limited success rate. To tackle this problem, we propose an adaptive path planning approach for robot arms based on Inverse Kinematics (IK) and Deep Reinforcement Learning (DRL) in a pick-and-place context. A judgement mechanism is developed to adaptively select the IK- or RL-based method according to the results of early-stage collision detection. We separate the pick-and-place task into three sequential curricula (approaching, grabbing, and placing) with modified reward functions to speed up the training process and achieve a higher success rate. The proposed approach is validated with a physical robot arm supported by a high-fidelity digital twin model. The experimental results show that our proposed approach outperforms the traditional RL-based method with improved training speed and guaranteed performance in collision avoidance and path accuracy. This work contributes to the practical deployment of RL-based path planning methods for digital twin-enabled robot arms in smart manufacturing.
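The judgement mechanism described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the planner names (`plan_ik_path`, `plan_rl_path`) and the collision predicate are hypothetical stand-ins for the paper's IK solver, trained DRL policy, and early-stage collision detector.

```python
# Hedged sketch of the adaptive selection idea: attempt a cheap analytic
# IK path first, run early-stage collision detection on it, and fall
# back to the (more expensive) RL-based planner only when a collision
# is predicted. All callables here are illustrative assumptions.
from typing import Callable, List, Tuple

Waypoint = Tuple[float, float, float]   # (x, y, z) end-effector position
Path = List[Waypoint]

def adaptive_plan(
    goal: Waypoint,
    plan_ik_path: Callable[[Waypoint], Path],
    detect_collision: Callable[[Path], bool],
    plan_rl_path: Callable[[Waypoint], Path],
) -> Tuple[str, Path]:
    """Select the IK- or RL-based path from early collision detection."""
    ik_path = plan_ik_path(goal)
    if not detect_collision(ik_path):
        return "IK", ik_path            # obstacle-free: the IK path suffices
    return "RL", plan_rl_path(goal)     # collision predicted: use learned policy
```

In this sketch the RL policy is only invoked when the straightforward IK path is predicted to collide, which is the source of the reported training and runtime savings.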
