A harmonious synergy between robotic performance and well-being in human-robot collaboration: A vision and key recommendations
- 10.3390/s23146416
- Jul 14, 2023
- Sensors (Basel, Switzerland)
- 10.1080/01691864.2021.2011780
- Dec 15, 2021
- Advanced Robotics
- 10.1080/15228053.2023.2233814
- Jul 3, 2023
- Journal of Information Technology Case and Application Research
- 10.1016/j.birob.2023.100131
- Oct 28, 2023
- Biomimetic Intelligence and Robotics
- 10.1037/apl0000106
- Jan 1, 2017
- Journal of Applied Psychology
- 10.1080/23311886.2021.1970880
- Jan 1, 2021
- Cogent Social Sciences
- 10.3390/machines12020113
- Feb 7, 2024
- Machines
- 10.1109/lra.2019.2893018
- Apr 1, 2019
- IEEE Robotics and Automation Letters
- 10.1007/s11423-021-10023-6
- Jul 14, 2021
- Educational Technology Research and Development
- 10.1111/joms.12549
- Jan 9, 2020
- Journal of Management Studies
- Research Article
- 10.3390/pr13030832
- Mar 12, 2025
- Processes
Industrial robotics has shifted from rigid, task-specific tools to adaptive, intelligent systems powered by artificial intelligence (AI), machine learning (ML), and sensor integration, revolutionizing efficiency and human–robot collaboration across manufacturing, healthcare, logistics, and agriculture. Collaborative robots (cobots) slash assembly times by 30% and boost quality by 15%, while reinforcement learning enhances autonomy, cutting errors by 30% and energy use by 20%. Yet, this review transcends descriptive summaries, critically synthesizing these trends to expose unresolved tensions in scalability, cost, and societal impact. High implementation costs and legacy system incompatibilities hinder adoption, particularly for SMEs, while interoperability gaps—despite frameworks like OPC UA—stifle multi-vendor ecosystems. Ethical challenges, including workforce displacement and cybersecurity risks, further complicate progress, underscoring a fragmented field where innovation outpaces practical integration. Drawing on a systematic review of high-impact literature, this study uniquely bridges technological advancements with interdisciplinary applications, revealing disparities in economic feasibility and equitable access. It critiques the literature’s isolation of trends—cobots’ safety, ML’s autonomy, and perception’s precision—proposing the following cohesive research directions: cost-effective modularity, standardized protocols, and ethical frameworks. By prioritizing scalability, interoperability, and sustainability, this paper charts a path for robotics to evolve inclusively, offering actionable insights for researchers, practitioners, and policymakers navigating this dynamic landscape.
- Research Article
- 10.3390/systems13080631
- Jul 26, 2025
- Systems
Human-Robot Collaboration (HRC) is pivotal for flexible, worker-centric manufacturing in Industry 5.0, yet dynamic task allocation remains difficult because operator states—fatigue and skill—fluctuate abruptly. I address this gap with a hybrid framework that couples real-time perception and double-estimating reinforcement learning. A Convolutional Neural Network (CNN) classifies nine fatigue–skill combinations from synthetic physiological cues (heart rate, blink rate, posture, wrist acceleration); its outputs feed a Double Deep Q-Network (DDQN) whose state vector also includes task-queue and robot-status features. The DDQN optimises a multi-objective reward balancing throughput, workload and safety and executes at 10 Hz within a closed-loop pipeline implemented in MATLAB R2025a and RoboDK v5.9. Benchmarking on a 1000-episode HRC dataset (2500 allocations·episode⁻¹) shows the hybrid CNN+DDQN controller raises throughput to 60.48 ± 0.08 tasks·min⁻¹ (+21% vs. rule-based, +12% vs. SARSA, +8% vs. Dueling DQN, +5% vs. PPO), trims operator fatigue by 7% and sustains 99.9% collision-free operation (one-way ANOVA, p < 0.05; post-hoc power 1 − β = 0.87). Visual analyses confirm responsive task reallocation as fatigue rises or skill varies. The approach outperforms strong baselines (PPO, A3C, Dueling DQN) by mitigating Q-value over-estimation through double learning, providing robust policies under stochastic human states and offering a reproducible blueprint for multi-robot, Industry 5.0 factories. Future work will validate the controller on a physical Doosan H2017 cell and incorporate fairness constraints to avoid workload bias across multiple operators.
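The double-learning step this abstract credits for mitigating Q-value over-estimation can be sketched in tabular form. This is an illustrative toy only: the paper's controller is a Double Deep Q-Network fed by CNN-classified operator states, and the state/action sizes, learning rate, and discount below are hypothetical stand-ins.

```python
import numpy as np

n_states, n_actions = 9, 4        # e.g. 9 fatigue-skill combinations (assumed)
q_online = np.zeros((n_states, n_actions))
q_frozen = np.zeros((n_states, n_actions))   # periodically-synced target table
gamma, alpha = 0.95, 0.1

def double_q_target(reward, next_state):
    # Decoupling action *selection* (online table) from action *evaluation*
    # (frozen target table) is what curbs Q-value over-estimation.
    best_a = int(np.argmax(q_online[next_state]))
    return reward + gamma * q_frozen[next_state, best_a]

def update(state, action, reward, next_state):
    # Standard temporal-difference update toward the double-Q target.
    td_target = double_q_target(reward, next_state)
    q_online[state, action] += alpha * (td_target - q_online[state, action])
```

In a deep variant, the two tables become the online and target networks, with the same selection/evaluation split.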
- Research Article
- 10.24193/rm.2025.1.1
- Jan 1, 2025
- Robotica & Management
This paper presents a RAG architecture for the Pepper robot to support real-time, multimodal interaction in industrial environments. By balancing local and cloud processing, the system improves task assistance, response accuracy, and user experience, while addressing both technical and psychological aspects of human-robot collaboration.
- Research Article
- 10.3390/drones9080516
- Jul 23, 2025
- Drones
The use of aerial robots for inspection and maintenance in industrial settings demands high maneuverability, precise control, and reliable measurements. This study explores the development of a fully customized unmanned aerial manipulator (UAM), composed of a tilting drone and an articulated robotic arm, designed to perform non-destructive in-contact inspections of iron structures. The system is intended to operate in complex and potentially hazardous environments, where autonomous execution is supported by shared-control strategies that include human supervision. A parallel force–impedance control framework is implemented to enable smooth and repeatable contact between a sensor for ultrasonic testing (UT) and the inspected surface. During interaction, the arm applies a controlled push to create a vacuum seal, allowing accurate thickness measurements. The control strategy is validated through repeated trials in both indoor and outdoor scenarios, demonstrating consistency and robustness. The paper also addresses the mechanical and control integration of the complex robotic system, highlighting the challenges and solutions in achieving a responsive and reliable aerial platform. The combination of semi-autonomous control and human-in-the-loop operation significantly improves the effectiveness of inspection tasks in hard-to-reach environments, enhancing both human safety and task performance.
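The parallel force–impedance idea described here can be sketched in one dimension along the contact normal: an outer PI force loop shifts the reference of an inner impedance law so the probe settles at a desired contact force. All gains, the 1-D reduction, and the function name are illustrative assumptions, not the UAM controller itself.

```python
def force_impedance_step(x, x_d, x_ref, f_meas, f_des, f_int, dt,
                         m=1.0, d=20.0, k=100.0, kp=0.002, ki=0.01):
    """One Euler step of a toy parallel force-impedance scheme (1-D).
    All parameter values are hypothetical, chosen only for illustration."""
    err = f_des - f_meas
    f_int += err * dt                          # force-error integral
    x_ref_eff = x_ref + kp * err + ki * f_int  # outer force loop shifts reference
    # Inner impedance dynamics: m*x_dd + d*x_d + k*(x - x_ref_eff) = -f_meas
    x_dd = (-f_meas - d * x_d - k * (x - x_ref_eff)) / m
    x_d += x_dd * dt
    x += x_d * dt
    return x, x_d, f_int
```

When the measured force is below the setpoint, the integral term keeps pushing the reference into the surface until the contact force converges, which is what allows a repeatable vacuum seal for the UT probe.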
- Book Chapter
- 10.1007/978-1-4899-7668-0_7
- Jan 1, 2016
Human-Robot Collaboration (HRC) on the factory floor has opened a new realm of manufacturing in real-world settings. In such applications, a human and a robot work together as coworkers, and HRC plays a critical role in safety, productivity, and flexibility. In particular, human-robot trust determines the human's acceptance, and hence allocation of autonomy to a robot, which alters the overall task efficiency and human workload. Inspired by well-known human factors research, we develop a time-series trust model for human-robot collaboration tasks, which is a function of prior trust, robot performance, and human performance. The robot performance is evaluated by its flexibility to keep pace with the human coworker and is modeled as the difference between human and robot speed. The human performance in physical tasks is directly related to the human's muscle fatigue level. We use muscle fatigue and recovery dynamics to capture the fatigue level of the human body when performing repetitive kinesthetic tasks, which are typical of human motions in manufacturing. The robot speed can be controlled in three different modes: manually by the associate, autonomously through robust intelligence algorithms, or collaboratively by combining manual and autonomous inputs. We first simulate a typical 9-h work day of human-robot collaborative tasks and implement the proposed trust model and the three control schemes. Furthermore, we experimentally validate our model and control schemes by conducting a series of human-in-the-loop experiments using the Rethink Robotics Baxter robot.
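A time-series trust model of the kind described here can be sketched as a linear recursion over prior trust and the two performance terms. The exponential fatigue curve, the speed-mismatch performance measure, and every coefficient below are hypothetical stand-ins, not the chapter's identified parameters.

```python
import math

def fatigue(t, rate=0.01):
    # Simple exponential muscle-fatigue growth during repetitive work (assumed form).
    return 1.0 - math.exp(-rate * t)

def robot_performance(human_speed, robot_speed):
    # Robot performance measured by how well it keeps pace with the human coworker.
    return 1.0 - abs(human_speed - robot_speed) / max(human_speed, robot_speed)

def trust_step(prev_trust, p_robot, p_human, a=0.8, b=0.15, c=0.05):
    # T(k) = a*T(k-1) + b*P_robot(k) + c*P_human(k); a+b+c=1 keeps trust bounded.
    return a * prev_trust + b * p_robot + c * p_human

trust = 0.5
for t in range(100):
    p_human = 1.0 - fatigue(t)               # human performance drops with fatigue
    p_robot = robot_performance(1.0, 0.9)    # constant robot speed in this toy run
    trust = trust_step(trust, p_robot, p_human)
```

Because the weights sum to one and both performance terms stay in [0, 1], the recursion keeps trust in (0, 1) over the simulated shift.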
- Research Article
- 10.1109/lra.2021.3062787
- Apr 1, 2021
- IEEE Robotics and Automation Letters
Advancements in robot technology are allowing for increasing integration of humans and robots in shared space manufacturing processes. While individual task performance of the robotic assistance and human operator can be separately optimized, the interaction between humans and robots can lead to emergent effects on collaborative performance. Thus, the performance benefits of increased automation in robotic assistance and their interplay with human factors need to be considered. As such, this letter examines the interplay of operator sex, their cognitive fatigue states, and varying levels of automation on collaborative task performance, operator situation awareness and perceived workload, and physiological responses (heart rate variability; HRV). Sixteen participants, balanced by sex, performed metal polishing tasks directly with a UR10 collaborative robot under different fatigued states and with varying levels of robotic assistance. Perceived fatigue, situation awareness, and workload were measured periodically, in addition to continuous physiological monitoring, and three task performance metrics (task efficiency, accuracy, and precision) were obtained. Higher robotic assistance demonstrated direct task performance benefits. However, unlike females, males did not perceive performance as better with higher automation. A relationship between situation awareness and automation was observed in both the HRV signals and subjective measures, where increased robot assistance reduced the attentional supply and task engagement of participants. The consideration of the interplay between human factors, such as operator sex and their cognitive states, and robot factors on collaborative performance can lead to improved human-robot collaborative system designs.
- Conference Article
- 10.1109/smc42975.2020.9283228
- Oct 11, 2020
In this paper, a time-driven, performance-aware mathematical model of trust in the robot is proposed for a Human-Robot Collaboration (HRC) framework. The proposed trust model is based on the performance of both the human operator and the robot. The human operator's performance is modeled from both physical and cognitive performance, while the robot performance is modeled over its unpredictable, predictable, dependable, and faithful operation regions. The model is validated via different simulation scenarios. The simulation results show that trust in the robot in the HRC framework is governed by both robot and human operator performance and can be improved by enhancing the robot performance.
- Research Article
- 10.3390/app13095429
- Apr 26, 2023
- Applied Sciences
The automation of bin-picking processes has been a research topic for almost two decades. General-purpose equipment, however, still does not show adequate success rates to find application in most industrial tasks. Human–robot collaboration in bin-picking tasks can increase the success rate by exploiting human perception and handling skills and the robot's ability to perform repetitive tasks. The aim of this paper, starting from general-purpose industrial bin-picking equipment comprising a 3D structured-light vision system and a collaborative robot, is to enhance its performance and extend its possible applications through human–robot collaboration. To achieve successful and fluent human–robot collaboration, the robotic workcell must meet the hardware and software requirements defined below. The proposed strategy is evaluated in sample tests: the experimental results show that collaborative functions can be particularly useful for overcoming typical bin-picking failures and improving the fault tolerance of the system, increasing its flexibility and reducing downtime.
- Conference Article
- 10.1109/hsi47298.2019.8942609
- Jun 1, 2019
Affect-based intelligent motion control for human-robot collaborative assembly in manufacturing was developed, and the effects of the dynamic affect-based control on human-robot collaboration (HRC) and assembly performance were investigated. An anthropomorphic robot with affect display ability was used to collaborate with a human in an assembly task in which the human and the robot collaboratively assembled three parts. First, to draw bioinspiration, the affective features in a human-human collaborative assembly task were studied. Second, based on these human affective features, an affect-based intelligent motion control strategy was proposed so that the robot could dynamically adjust its affective states like a human as task situations changed during the human-robot collaborative assembly. The proposed affect-based motion control was experimentally evaluated for HRC and assembly performance, and the results were compared with those obtained when the robot collaborated with its human counterpart using no affect display and a static affect display. The results showed that the static affect display produced better HRC and assembly performance than no affect display; however, the dynamic affect display produced significantly better HRC and assembly performance than either alternative. These results encourage employing anthropomorphic robots with dynamic affect-based motion control strategies to collaborate with humans in manufacturing to improve HRC and manufacturing performance.
- Research Article
- 10.1007/s10846-025-02237-0
- Mar 20, 2025
- Journal of Intelligent & Robotic Systems
Human–robot collaboration is crucial in various industries, making accurate prediction of human arm movements essential for seamless interaction. This paper presents a significant advancement in collaborative robotics by developing a hybrid model that enhances the accuracy and interpretability of human motion predictions. By integrating a Physics-Infused Model with Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) networks, our approach effectively captures intricate temporal dependencies while incorporating physical constraints, leading to more robust and realistic predictions. The hybrid model was successfully implemented on an ABB IRB 120 robot, demonstrating its practical applicability in real-world scenarios. Our results show that this model outperforms conventional methods, particularly in predicting human arm positions during collaborative tasks. The key contribution of this work lies in the integration of deep learning with physics-based principles, setting a new benchmark for predictive accuracy in human–robot collaboration. This research not only enhances the performance of collaborative robots but also opens the door for similar hybrid models to be applied in other fields where accurate motion prediction is critical.
- Research Article
- 10.3390/electronics13061044
- Mar 11, 2024
- Electronics
The objective was to investigate the impacts of the robot’s dynamic affective expressions in task-related scenarios on human–robot collaboration (HRC) and performance in human–robot collaborative assembly tasks in flexible manufacturing. A human–robot hybrid cell was developed to facilitate a human co-worker and a robot collaborating to assemble a few parts into a final product. The collaborative robot was a humanoid manufacturing robot able to display on its face its affective states arising from changes in task scenarios. The assembly task was divided into several subtasks, and based on an optimization strategy, the subtasks were optimally allocated to the human and the robot. A computational model of the robot’s affective states was derived from that of humans following a biomimetic approach, and an affect-based motion planning strategy was proposed to enable the robot to adjust its motions and behaviors to task situations and communicate (inform) the situations to the human co-worker through affective expressions. The HRC and assembly performance of the affect-based motion planning were experimentally evaluated using a comprehensive evaluation scheme and compared with two alternative conditions: (i) motion planning that did not display affective states, and (ii) motion planning that displayed text messages instead of affective states to communicate the situations to the human co-worker. The results clearly showed that the dynamic affect-based motion planning produced significantly better HRC and assembly performance than motion planning with no affective states or with text messages. The results encourage employing manufacturing robots with dynamic affective expressions to collaborate with humans in flexible assembly to improve HRC and assembly performance.
- Supplementary Content
- 10.3389/frobt.2022.799522
- Feb 3, 2022
- Frontiers in Robotics and AI
The degree of successful human-robot collaboration depends on the joint consideration of robot factors (RF) and human factors (HF). Depending on the state of the operator, a change in a robot factor, such as the behavior or level of autonomy, can be perceived differently and affect how the operator chooses to interact with and utilize the robot. This interaction can affect system performance and safety in dynamic ways. The theory of human factors in human-automation interaction has long been studied; however, the formal investigation of these HFs in shared-space human-robot collaboration (HRC), and the potential interactive effects between covariate HFs (HF-HF) and between HFs and RFs (HF-RF), requires further study. Furthermore, methodological applications to measure or manipulate these factors can provide insights into contextual effects and the potential for improved measurement techniques. As such, a systematic literature review was performed to evaluate the most frequently addressed operator HF states in shared-space HRC, the methods used to quantify these states, and the implications of the states for HRC. The three most frequently measured states are trust, cognitive workload, and anxiety, with subjective questionnaires universally the most common method to quantify operator states, except for fatigue, where electromyography is more common. Furthermore, the majority of included studies evaluate the effect of manipulating RFs on HFs, but few explain the effect of the HFs on system attributes or performance. For those that provided this information, HFs have been shown to impact system efficiency and response time, collaborative performance and quality of work, and operator utilization strategy.
- Research Article
- 10.1177/00187208241254696
- May 28, 2024
- Human factors
The purpose of this study is to identify the potential biomechanical and cognitive workload effects induced by a human-robot collaborative pollination task, how additional cues and the reliability of the robot influence these effects, and whether interacting with the robot influences participants' anxiety and attitudes towards robots. Human-Robot Collaboration (HRC) could be used to alleviate pollinator shortages and robot performance issues. However, the effects of HRC in this setting have not been investigated. Sixteen participants were recruited. Four HRC modes were included: no cue, with cue, unreliable, and manual control. Three categories of dependent variables were measured: (1) spine kinematics (L5/S1, L1/T12, and T1/C7), (2) pupillary activation data, and (3) subjective measures such as perceived workload, robot-related anxiety, and negative attitudes towards robotics. HRC reduced anxiety towards the cobot, decreased joint angles and angular velocity for the L5/S1 and L1/T12 joints, and reduced pupil dilation, with the "with cue" mode producing the lowest values. However, unreliability was detrimental to these gains. In addition, HRC resulted in a higher flexion angle for the neck (i.e., T1/C7). HRC reduced the physical and mental workload during the simulated pollination task. Benefits of the additional cue were minimal compared to no cues. The increased joint angle in the neck, and the effect of unreliability on lower- and mid-back joint angles and workload, require further investigation. These findings could be used to inform design decisions for HRC frameworks for agricultural applications that are cognizant of the different effects induced by HRC.
- Research Article
- 10.3389/frobt.2022.943261
- Sep 27, 2022
- Frontiers in robotics and AI
Adoption of human–robot collaboration is hindered by barriers in collaborative task design. A new approach to solving these problems is to empower operators in the design of their tasks. However, how this approach may affect user welfare or performance in industrial scenarios has not yet been studied. Therefore, this research presents the results of an experiment designed to identify the influence of an operator's self-designed task on physical ergonomics and task performance. First, a collaborative framework able to accept operator task definition via parts' locations and to monitor the operator's posture is presented. Second, the framework is used to tailor a collaborative experience favoring decision autonomy using the SHOP4CF architecture. Finally, the framework is used to investigate how this personalization influences collaboration through a user study on physical ergonomics with untrained personnel. The results of this study are twofold. On one hand, a high degree of decision autonomy was felt by the operators when they were allowed to allocate the parts. On the other hand, high decision autonomy was not found to affect either task efficiency or the MSD risk level. Therefore, this study emphasizes that allowing operators to choose the position of the parts may help task acceptance and does not alter operators' physical ergonomics or task efficiency. However, the test was limited to 16 participants and the measured risk level was medium. Therefore, this study also stresses that operators should be allowed to choose their own work parameters, but some guidelines should be followed to further reduce MSD risk levels.
- Conference Article
- 10.1061/9780784483961.060
- Mar 7, 2022
Amid the rapid development of robotic technologies and artificial intelligence, Human-Robot Collaboration (HRC) has gained momentum in a variety of civil engineering applications. However, with robots only obeying predefined algorithms without human-like intelligence, there is no widely accepted method that enables complete integration of a human-robot team in knowledge- and experience-based tasks, such as building inspection. To enhance the efficiency of inspection tasks, a deeper insight must be gained into the respective advantages and limitations of human intelligence and robotic algorithms, and into how the two can be seamlessly integrated. As a first step, in this paper we designed an experiment to compare human and robot performance in a building inspection task. A quadrupedal robot is simulated in ROS (Robot Operating System) Gazebo and automatically navigates and scans the buildings using SLAM (simultaneous localization and mapping) and RRT (rapidly exploring random tree) algorithms, while human experts finish the same inspection task in virtual reality. The total number of identified structural defects, inspection time, and routes are recorded and compared. The results show an apparent difference in human route-planning patterns, with considerably better accuracy and efficiency in building inspection.
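The RRT planner named in this abstract can be sketched in a few lines for an obstacle-free 2-D workspace. This is a textbook toy, not the paper's ROS Gazebo setup: the workspace bounds, goal bias, step size, and function name are all illustrative assumptions.

```python
import math
import random

def rrt(start, goal, step=0.5, iters=2000, goal_tol=0.5, seed=1):
    """Toy 2-D rapidly exploring random tree in [0, 10]^2 (no obstacles)."""
    random.seed(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Goal-biased sampling: occasionally steer straight toward the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        # Extend the nearest node one step toward the sample.
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk back through parents to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

A real inspection planner would add collision checks against the SLAM map before accepting each extension.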
- Research Article
- 10.1017/s0263574723000383
- Apr 11, 2023
- Robotica
Admittance control of the robot is an important method to improve human–robot collaborative performance. However, fixed admittance parameters often match human–robot collaborative motion poorly, resulting in poor motion performance when the robot interacts with a changeable environment (the human). Therefore, to improve the performance of human–robot collaboration, a human-like variable admittance parameter regulator (HVAPR) based on the change rate of the interaction force is proposed, derived by studying the human arm's static and dynamic admittance parameters in human–human collaborative motion. HVAPR can generate admittance parameters matching human collaborative motion. To test the performance of the proposed HVAPR, a human–robot collaborative motion experiment based on HVAPR is designed and compared with a variable admittance parameter regulator (VAPR). The satisfaction, recognition ratio, and recognition confidence of the two admittance parameter regulators are statistically analyzed via questionnaire. Simultaneously, the trajectory and interaction force of the robot are analyzed, and the performance of the human–robot collaborative motion is assessed and compared using a trajectory smoothness index and an average energy index. The results show that HVAPR is superior to VAPR in human–robot collaborative satisfaction, robot trajectory smoothness, and average energy consumption.
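A one-dimensional admittance loop with damping scheduled on the force change rate can be sketched as below. The scheduling direction (fast force changes taken as intended motion, so damping is lowered) is a common variable-admittance heuristic and an assumption here; the paper's HVAPR derives its parameters from human–human collaboration data, which this toy does not reproduce.

```python
def admittance_step(x, x_d, f_ext, f_prev, dt,
                    m=2.0, d_min=5.0, d_max=30.0, k_df=0.002):
    """One Euler step of M*x_dd + D*x_d = F_ext with variable damping D.
    All parameter values are illustrative assumptions."""
    df = abs(f_ext - f_prev) / dt            # interaction-force change rate
    d = max(d_min, d_max - k_df * df)        # lower damping when force ramps fast
    x_dd = (f_ext - d * x_d) / m             # admittance dynamics
    x_d += x_dd * dt
    x += x_d * dt
    return x, x_d

x, x_d, f_prev = 0.0, 0.0, 0.0
for _ in range(500):                         # constant 10 N push for 0.5 s
    x, x_d = admittance_step(x, x_d, 10.0, f_prev, dt=0.001)
    f_prev = 10.0
```

Under a constant push the velocity settles toward F/D, so lowering D during force transients makes the robot feel lighter exactly when the human initiates motion.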
- Conference Article
- 10.1145/2696454.2696497
- Mar 2, 2015
How do mistakes made by a robot affect its trustworthiness and acceptance in human-robot collaboration? We investigate how the perception of erroneous robot behavior may influence human interaction choices and the willingness to cooperate with the robot by following a number of its unusual requests. For this purpose, we conducted an experiment in which participants interacted with a home companion robot in one of two experimental conditions: (1) the correct mode or (2) the faulty mode. Our findings reveal that, while significantly affecting subjective perceptions of the robot and assessments of its reliability and trustworthiness, the robot's performance does not seem to substantially influence participants' decisions to (not) comply with its requests. However, our results further suggest that the nature of the task requested by the robot, e.g. whether its effects are revocable as opposed to irrevocable, has a significant impact on participants' willingness to follow its instructions.
- Conference Article
- 10.1109/icar53236.2021.9659356
- Dec 6, 2021
In this paper we discuss a methodology for learning human-robot collaboration tasks by human guidance. In the proposed framework, the robot learns the task over multiple repetitions by comparing and adapting the performed trajectories, so that the robot's performance naturally evolves into a collaborative behavior. When comparing the trajectories of two learning cycles, the problem of accurate phase determination arises, since imprecise phase determination degrades the precision of the learned collaborative behavior. To solve this issue, we propose a new projection algorithm for measuring the similarity of two trajectories. The proposed algorithm was experimentally verified and compared to the performance of dynamic time warping in learning human-robot collaboration tasks with a Franka Emika Panda collaborative robot.
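The baseline the authors compare against is dynamic time warping (DTW). Below is a standard textbook DTW distance between two 1-D trajectories; the paper's proposed projection-based similarity measure is not shown here.

```python
def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Allowed warping moves: match, insertion, deletion.
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]
```

Because DTW aligns samples non-uniformly, two repetitions of the same motion executed at different phases still compare as similar, which is why it serves as the natural baseline for phase determination.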
- Conference Article
- 10.1109/iros47612.2022.9981490
- Oct 23, 2022
A robot must comply with very restrictive safety standards in close human-robot collaboration applications. These standards limit the robot's performance because of speed reductions to avoid potentially large forces exerted on humans during collisions. On-robot capacitive proximity sensors (CPS) can serve as a solution to allow higher speeds and thus better productivity. They allow early reactive measures before contacts occur, reducing the forces during collisions. An open question in designing these systems is the selection of an adequate activation distance to trigger safety measures for a specific robot while considering latency and detection robustness. Furthermore, the actual impact-attenuation effectiveness and performance gain of such systems have not been evaluated before. In this work, we define and conduct a unified test procedure based on collision experiments to determine these parameters and investigate the performance gain. Two capacitive proximity sensor systems are evaluated with this test strategy on two robots. A significant performance increase can be achieved, since a small detection distance doubles robot operation speed while maintaining the same contact force as without a CPS. This work can serve as a reference guide for designing, configuring, and implementing future on-robot CPS.
- Research Article
- 10.1016/j.arcontrol.2025.101022
- Oct 1, 2025
- Annual Reviews in Control
- Research Article
- 10.1016/j.arcontrol.2025.100994
- Jun 1, 2025
- Annual Reviews in Control
- Research Article
- 10.1016/j.arcontrol.2024.100984
- Jan 1, 2025
- Annual Reviews in Control
- Research Article
- 10.1016/j.arcontrol.2025.101010
- Jan 1, 2025
- Annual Reviews in Control
- Research Article
- 10.1016/j.arcontrol.2025.101028
- Jan 1, 2025
- Annual Reviews in Control
- Research Article
- 10.1016/j.arcontrol.2025.100989
- Jan 1, 2025
- Annual Reviews in Control
- Research Article
- 10.1016/j.arcontrol.2025.101009
- Jan 1, 2025
- Annual Reviews in Control
- Research Article
- 10.1016/j.arcontrol.2025.101027
- Jan 1, 2025
- Annual Reviews in Control
- Research Article
- 10.1016/j.arcontrol.2025.100991
- Jan 1, 2025
- Annual Reviews in Control
- Research Article
- 10.1016/j.arcontrol.2025.101008
- Jan 1, 2025
- Annual Reviews in Control