Abstract

In manufacturing, traditional task pre-programming methods limit the efficiency of human–robot skill transfer. This paper proposes a novel task-learning strategy that enables robots to learn skills flexibly from human demonstrations and to generalize those skills to new task situations. Specifically, we establish a markerless vision capture system to acquire continuous human hand movements and develop a threshold-based heuristic segmentation algorithm that segments the complete movements into movement primitives (MPs), which encode human hand movements with task-oriented models. For movement primitive learning, we adopt a Gaussian mixture model and Gaussian mixture regression (GMM-GMR) to extract an optimal trajectory encapsulating sufficient human features, and we utilize dynamical movement primitives (DMPs) for trajectory learning and generalization. In addition, we propose an improved visuo-spatial skill learning (VSL) algorithm to learn goal configurations, i.e., the spatial relationships between task-relevant objects. Only one multi-operation demonstration is required for learning, and robots can generalize goal configurations to new task situations while following the task execution order from the demonstration. A series of peg-in-hole experiments shows that the proposed task-learning strategy obtains exact pick-and-place points and generates smooth, human-like trajectories, verifying its effectiveness.
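To make the GMM-GMR step of the abstract concrete, the sketch below fits a Gaussian mixture over joint (time, position) samples from several time-aligned demonstrations and then regresses a mean trajectory by conditioning on time. This is a minimal sketch, not the authors' implementation: the synthetic demonstrations stand in for recorded hand trajectories, and the number of mixture components and the use of scikit-learn's GaussianMixture are illustrative assumptions.

```python
# Minimal GMM-GMR sketch (illustrative; not the paper's code).
import numpy as np
from sklearn.mixture import GaussianMixture

def gmr(gmm, t_query):
    """Condition the fitted GMM on time and return E[position | time] per query time."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros((len(t_query), means.shape[1] - 1))
    for i, t in enumerate(t_query):
        # Responsibility of each component for this time value (1-D Gaussian in time)
        h = np.array([w * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                      for w, m, c in zip(weights, means, covs)])
        h /= h.sum()
        # Conditional mean of the spatial part given time, per component
        cond = [m[1:] + c[1:, 0] / c[0, 0] * (t - m[0]) for m, c in zip(means, covs)]
        out[i] = np.sum(h[:, None] * np.array(cond), axis=0)
    return out

# Five synthetic "demonstrations": noisy 1-D reach trajectories resampled to equal length
T = 100
t = np.linspace(0.0, 1.0, T)
demos = [0.3 * (3 * t**2 - 2 * t**3) + 0.005 * np.random.randn(T) for _ in range(5)]
data = np.vstack([np.column_stack([t, d]) for d in demos])   # joint (time, position) samples

gmm = GaussianMixture(n_components=6, covariance_type="full").fit(data)
mean_traj = gmr(gmm, t)   # smooth reference trajectory summarizing the demonstrations
```

The regressed mean trajectory would then serve as the reference that the DMPs encode for generalization.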

Highlights

  • Recent advances in artificial intelligence and sensor technology have heightened the need for robots to perform assembly tasks autonomously

  • Inspired by research on vision-based action recognition [33] showing that possible Grasp and Release points may occur at the locally lowest points of human hand palm motion, we propose a heuristic segmentation algorithm based on human hand centroid velocities, as depicted in Algorithm 1 (a velocity-threshold sketch of this idea follows the list)

  • We focus on discrete movements and encode each degree of freedom (DOF) in Cartesian space with a separate dynamical movement primitive (DMP), described by the canonical system $\tau\dot{x} = -\alpha_x x$ and the transformation system $\tau\dot{z} = \alpha_z\big(\beta_z(G - y) - z\big) + f$, $\tau\dot{y} = z$ (Eq. 9) (a numerical rollout sketch of these equations follows the list)
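The second highlight's velocity-based heuristic can be pictured with the following sketch: it computes the palm-centroid speed profile and places candidate Grasp/Release cut points at local speed minima that fall below a threshold. This is only one plausible reading of the idea behind Algorithm 1; the function name, threshold values, and minimum-gap rule are hypothetical, not taken from the paper.

```python
# Illustrative velocity-threshold segmentation (a sketch, not Algorithm 1 itself).
import numpy as np

def segment_by_velocity(positions, dt, v_thresh=0.02, min_gap=10):
    """Split a hand-centroid trajectory into movement primitives.

    positions : (T, 3) array of palm-centroid positions
    dt        : sampling period in seconds
    Candidate grasp/release points are local minima of the speed profile
    that also fall below v_thresh; min_gap suppresses near-duplicate cuts.
    """
    vel = np.gradient(positions, dt, axis=0)          # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)               # palm-centroid speed profile
    cuts = []
    for i in range(1, len(speed) - 1):
        is_local_min = speed[i] <= speed[i - 1] and speed[i] <= speed[i + 1]
        if is_local_min and speed[i] < v_thresh:
            if not cuts or i - cuts[-1] >= min_gap:    # drop cuts that are too close
                cuts.append(i)
    # Slice the trajectory into movement primitives between consecutive cut points
    bounds = [0] + cuts + [len(positions)]
    return [positions[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b - a > 1]
```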
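The DMP equations in the last highlight can be rolled out numerically as shown below: the canonical system drives a phase variable x from 1 toward 0, while the transformation system pulls y toward the goal G, with a phase-dependent forcing term f shaping the path. The radial-basis forcing term and the gain values here are common textbook choices assumed for illustration; in the paper the forcing-term weights would be fit to the demonstrated reference trajectory.

```python
# One-DOF DMP rollout for the canonical system  tau*dx = -alpha_x*x  and the
# transformation system  tau*dz = alpha_z*(beta_z*(G - y) - z) + f,  tau*dy = z.
# Sketch only: basis-function weights w would normally be learned from a demonstration.
import numpy as np

def dmp_rollout(y0, G, w, centers, widths, tau=1.0, dt=0.01,
                alpha_x=1.0, alpha_z=25.0, beta_z=25.0 / 4.0):
    x, y, z = 1.0, y0, 0.0                              # phase, position, scaled velocity
    traj = [y]
    while x > 1e-3:
        psi = np.exp(-widths * (x - centers) ** 2)          # RBF basis on the phase
        f = x * (G - y0) * (psi @ w) / (psi.sum() + 1e-10)  # forcing term
        zdot = (alpha_z * (beta_z * (G - y) - z) + f) / tau
        ydot = z / tau
        xdot = -alpha_x * x / tau
        x, y, z = x + xdot * dt, y + ydot * dt, z + zdot * dt   # Euler integration
        traj.append(y)
    return np.array(traj)

# Example: generalize to a new goal with (hypothetical) previously learned weights
centers = np.linspace(1.0, 0.01, 20)
widths = np.full(20, 50.0)
w = np.zeros(20)                                        # zero forcing -> pure goal attraction
path = dmp_rollout(y0=0.0, G=0.3, w=w, centers=centers, widths=widths)
```

One DMP of this form per Cartesian DOF lets the same learned shape be replayed toward new start and goal points, which is what enables the generalization described in the abstract.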


Summary

Introduction

Recent advances in artificial intelligence and sensor technology have heightened the need for robots to perform assembly tasks autonomously. Applying them in manufacturing remains a significant challenge, since traditional industrial robots deployed in production lines are pre-programmed for a specific task in a carefully structured environment. To overcome these challenges on different levels, the field of industrial robotics is moving towards Human–Robot Collaboration (HRC) [1,2,3,4,5,6,7]. Within HRC, Robot Learning from Demonstration (LfD) [8] provides a natural and intuitive mechanism for humans to teach robots new skills without relying on professional knowledge. Robots first observe human demonstrations, extract task-relevant features, derive the optimal policy mapping world states to actions, reproduce and generalize tasks in different situations, and refine the policy during practice [9] (see Figure 1).

