The integration of human–robot collaboration yields substantial benefits, particularly in terms of enhanced flexibility and efficiency across a range of mass-personalized manufacturing tasks, for example, small-batch customized product inspection and assembly/disassembly. Meanwhile, as human–robot collaboration is deployed more broadly in manufacturing, unstructured scenes and operator-induced uncertainties must increasingly be taken into account. Consequently, it becomes imperative for robots to operate in a safe and adaptive manner rather than relying solely on pre-programmed instructions. To address this, a systematic solution for safe robot motion generation in human–robot collaborative activities is proposed, leveraging mixed-reality technologies and Deep Reinforcement Learning. This solution covers the entire collaboration process, beginning with an intuitive interface that enables bare-hand task command transmission and scene coordinate transformation before the collaboration starts. In particular, mixed-reality devices are employed as effective tools for representing the state of humans, robots, and scenes. This enables the learning of an end-to-end Deep Reinforcement Learning policy that addresses uncertainties in both robot perception and decision-making in an integrated manner. The proposed solution also implements simulation-to-reality policy deployment, along with motion preview and collision detection mechanisms, to ensure safe robot motion execution. It is hoped that this work will inspire further research in human–robot collaboration to unleash and exploit the powerful capabilities of mixed reality.
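As an illustration of the scene coordinate transformation step mentioned above, the following minimal sketch shows how a target position reported in a mixed-reality headset frame could be mapped into the robot base frame using a calibrated homogeneous transform. The frame names, calibration values, and the `mr_to_robot` helper are hypothetical and serve only to illustrate the idea; they are not taken from the paper.

```python
import numpy as np


def homogeneous_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T


def mr_to_robot(point_mr: np.ndarray, T_robot_mr: np.ndarray) -> np.ndarray:
    """Map a 3D point expressed in the mixed-reality (headset) frame into the robot base frame."""
    p_h = np.append(point_mr, 1.0)  # homogeneous coordinates
    return (T_robot_mr @ p_h)[:3]


if __name__ == "__main__":
    # Hypothetical calibration: MR world frame rotated 90 degrees about z and offset from the robot base.
    Rz = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
    t = np.array([0.5, -0.2, 0.1])  # metres
    T_robot_mr = homogeneous_transform(Rz, t)

    # A grasp target indicated by the operator's bare hand, reported in the MR frame.
    target_mr = np.array([0.30, 0.10, 0.25])
    target_robot = mr_to_robot(target_mr, T_robot_mr)
    print("Target in robot base frame:", target_robot)
```

In such a setup, the calibrated transform would typically be obtained once during the pre-collaboration alignment step, after which hand-specified task commands can be expressed directly in the robot's coordinate frame.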