Abstract

Human–Robot Collaboration (HRC) describes tasks in which robots and humans work together to achieve a goal. Unlike traditional industrial robots, collaborative robots need to be adaptive: able to alter their approach to better suit the situation and the needs of the human partner. As traditional programming techniques can struggle with the required complexity, an emerging approach is to learn a skill by observing and imitating human demonstrations, commonly known as Learning from Demonstration (LfD). In this work, we present an LfD methodology that combines an ensemble machine learning algorithm, Random Forest (RF), with stochastic regression, using haptic information captured from human demonstration. The capabilities of the proposed method are evaluated on two collaborative tasks: co-manipulation of an object (where the human provides the guidance but the robot bears the object's weight) and collaborative assembly of simple interlocking parts. The proposed method is shown to be capable of imitation learning, interpreting human actions and producing equivalent robot motion across a diverse range of initial and final conditions. After verifying that ensemble machine learning can be applied to real robotics problems, we propose a further extension, the Weighted Random Forest (WRF), which attaches a weight to each tree based on its performance. The WRF approach is then shown to outperform RF in HRC tasks.

  • Ensemble learning methods can be used to teach robots co-manipulation/assembly.

  • A co-manipulation skill can be demonstrated by two humans, or by a human and a robot.

  • Human–human and human–robot skills were captured using Random Forest (RF) models.

  • Overall performance was enhanced by weighting RF trees based on their RMSE.

  • The co-manipulation and co-assembly tasks were intended to validate the fitted models.

Highlights

  • Over the last decade, there has been significant advancement in many areas of robotics.

  • We present a new weighting mechanism that extends Random Forest (RF) from a collaborative model into a competitive one: whereas the output of an RF model is the average output of all its trees, the Weighted Random Forest (WRF) weights each tree based on its performance on a test dataset.

  • This paper focuses on methods of transferring human skills to an industrial robot using a Learning from Demonstration (LfD) framework.
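The per-tree weighting described in the highlights can be sketched as follows. The article states that trees are weighted by their performance (RMSE) on a test dataset; the exact weighting formula is not reproduced here, so this sketch assumes a simple inverse-RMSE scheme, and the function name `wrf_predict` is illustrative rather than taken from the paper. It uses scikit-learn's `RandomForestRegressor`, whose fitted trees are exposed via `estimators_`:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error


def wrf_predict(rf, X_test, y_test, X_query):
    """Weighted Random Forest prediction (sketch).

    Each tree in the fitted forest `rf` is scored by its RMSE on a
    held-out test set; the forest output is then the weighted average
    of the trees' predictions, with weights inversely proportional to
    each tree's RMSE (an assumed weighting scheme).
    """
    # RMSE of every individual tree on the held-out data
    rmses = np.array(
        [np.sqrt(mean_squared_error(y_test, t.predict(X_test)))
         for t in rf.estimators_]
    )
    # Better trees (lower RMSE) receive larger weights; normalise to sum to 1
    weights = 1.0 / (rmses + 1e-12)
    weights /= weights.sum()
    # Weighted average of per-tree predictions replaces the plain RF mean
    per_tree = np.array([t.predict(X_query) for t in rf.estimators_])
    return weights @ per_tree


# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + 0.1 * rng.normal(size=200)
rf = RandomForestRegressor(n_estimators=20, random_state=0).fit(X[:150], y[:150])
y_wrf = wrf_predict(rf, X[150:], y[150:], X[:5])
```

Plain RF corresponds to the special case of uniform weights, so any improvement from `wrf_predict` comes purely from down-weighting trees that generalise poorly on the held-out set.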


Introduction

There has been significant advancement in many areas of robotics. For collaborative robots (cobots), an essential area of improvement has been safety, with commercial systems in operation in many industrial roles. These typically work through the use of contact sensors (either tactile or force based), which detect resistance to motion and immediately stop the robot to prevent injury. Despite these improvements, there is still much to explore in this area. While halting the robot upon detection of an external force is safe, it is unsuitable for any task involving direct Human–Robot Collaboration (HRC). Tasks such as holding components in place for a human to fasten together (i.e. a co-assembly task [1]) or moving heavy objects into a specific place under human guidance (i.e. a co-manipulation task [2,3], such as shown in Fig. 1) cannot be achieved if the robot must stop at the first external stimulus.
