Abstract

With collaborative robots and recent developments in manufacturing technologies, physical interaction between humans and robots plays a vital role in performing collaborative tasks. Most previous studies have focused on robot motion planning and control during task execution. However, further research is required on direct physical contact in human-robot or robot-robot interactions, such as co-manipulation. In co-manipulation, a human operator manipulates a shared load with a robot through a semi-structured environment. In such scenarios, multiple contact points with the environment during task execution result in a convoluted force/torque signature that is difficult to interpret. Therefore, in this paper, a muscle activity sensor in the form of an electromyograph (EMG) is employed to improve the mapping between force/torque and displacement in co-manipulation tasks. A suitable mapping was identified by comparing the root mean square error (RMSE) amongst data-driven models, mathematical models, and hybrid models. Thus, a robot was shown to effectively and naturally perform the required co-manipulation with a human. The paper's proposed hypotheses were validated using an unseen test dataset and a simulated co-manipulation experiment, which showed that the EMG signal and the data-driven model improved the mapping of force/torque features into displacements.
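
As a hedged illustration only (not the authors' code), the sketch below shows how such a comparison could be set up: synthetic force/torque and EMG features are mapped to displacement by a fixed-admittance mathematical stand-in and by a simple data-driven least-squares fit, and the two candidates are ranked by RMSE. All names, dimensions, coefficients, and data are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): ranking candidate force/torque + EMG
# -> displacement mappings by root mean square error (RMSE), as described in
# the abstract. All data, dimensions, and coefficients are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 500

ft = rng.normal(size=(n, 6))           # placeholder 6-axis force/torque samples
emg = np.abs(rng.normal(size=(n, 1)))  # placeholder rectified EMG envelope
X = np.hstack([ft, emg])               # combined feature matrix

# Hypothetical "true" displacement: force scaled by a muscle-activity-dependent
# compliance, standing in for the human-robot co-manipulation dynamics.
true_disp = 0.05 * ft[:, :3] * (1.0 + np.tanh(emg))

def rmse(pred, target):
    """Root mean square error over all samples and axes."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Mathematical-model (MM) stand-in: a fixed admittance that ignores EMG.
mm_pred = 0.05 * ft[:, :3]

# Data-driven stand-in: ordinary least squares on force/torque + EMG features.
W, *_ = np.linalg.lstsq(X, true_disp, rcond=None)
dd_pred = X @ W

print(f"MM RMSE: {rmse(mm_pred, true_disp):.3f} m")
print(f"DD RMSE: {rmse(dd_pred, true_disp):.3f} m")
```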

Highlights

  • Industrial robots are starting to move from confined spaces into areas shared with humans, which reduces operational costs for several industrial applications [1]

  • The root mean square error (RMSE) values of the Hybrid Modelling (HM) approach (Figure 10) were (0.051, 0.056, 0.051) m, comparable to the data-driven approach. We propose that these results arise from the unknown dynamics naturally originating in the human body, which allow an adaptive non-linear change of stiffness and compliance that is not captured by the simplified mathematical model (MM)

  • The accuracy of the fitted models was tested on unseen, randomly split test data, which showed that the sequential RNN trained on the F1 feature set had the lowest RMSE compared to the other machine learning (ML) models (see the sketch after this list)
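
As a minimal sketch under stated assumptions (the paper's implementation is not reproduced here), the block below shows one way a sequential RNN could regress displacement from short windows of force/torque and EMG features; the use of PyTorch, the layer sizes, the window length, and the feature count are all assumptions.

```python
# Minimal sketch (assumptions throughout, not the paper's implementation): a
# sequential RNN that regresses Cartesian displacement from a short window of
# force/torque + EMG feature vectors, in the spirit of the F1-trained model.
import torch
import torch.nn as nn

class DisplacementRNN(nn.Module):
    def __init__(self, n_features: int = 7, hidden: int = 64):
        super().__init__()
        # LSTM over the time window of force/torque + EMG features.
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        # Linear head regressing 3-axis displacement from the last hidden state.
        self.head = nn.Linear(hidden, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features)
        _, (h_n, _) = self.rnn(x)
        return self.head(h_n[-1])  # (batch, 3) displacement estimate

# Usage with stand-in data: 8 windows of 50 time steps, 7 features each.
model = DisplacementRNN()
windows = torch.randn(8, 50, 7)
pred_disp = model(windows)                              # shape: (8, 3)
rmse = torch.sqrt(nn.functional.mse_loss(pred_disp, torch.zeros(8, 3)))
```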



Introduction

Industrial robots are starting to move from confined spaces into areas shared with humans, which reduces operational costs for several industrial applications [1]. The co-existence of humans and robots raises many critical challenges regarding human safety, task scheduling, and system evaluation [2]. To tackle these challenges, researchers in human-robot collaboration (HRC) focus on improving production efficiency, safety, and the quality of collaboration between humans and robots. Several approaches combine human cognitive and perceptual abilities with the robot's endurance to perform a collaborative task. This includes intended physical contact between humans and robots during actions such as hand-overs, co-manipulation, co-drilling, and many other applications [4,5].
