Abstract

Equipping collaborative robots with skin-like tactile sensors enhances both safety and usability by adding the capability to detect human contact. Unfortunately, simple binary tactile sensors alone cannot determine the context of the human contact—whether it is a deliberate interaction or an unintended collision that requires safety manoeuvres. Many published methods classify discrete interactions using more advanced tactile sensors or by analysing joint torques. Instead, we propose to augment the intention recognition capabilities of simple binary tactile sensors by adding a robot-mounted camera for human posture analysis. Different interaction characteristics, including touch location, human pose, and gaze direction, are used to train a supervised machine learning algorithm to classify whether a touch is intentional or not with an F1-score of 86%. We demonstrate that multimodal intention recognition is significantly more accurate than monomodal analyses with the collaborative robot Baxter. Furthermore, our method can continuously monitor interactions that fluidly change between intentional and unintentional by gauging the user's attention through gaze. If a user stops paying attention mid-task, the proposed intention and attention recognition algorithm can activate safety features to prevent unsafe interactions. We also employ a feature reduction technique that reduces the number of inputs to five, yielding a more generalized low-dimensional classifier. This simplification both reduces the amount of training data required and improves real-world classification accuracy. It also renders the method potentially agnostic to the robot and touch sensor architectures while achieving a high degree of task adaptability.

Note to Practitioners — Whenever a user interacts physically with a robot, such as in collaborative manufacturing, the robot may respond to unintended touch inputs from the user. These may arise from body collisions or because the user is suddenly distracted and no longer paying attention to what they are doing. We propose an easy-to-implement method to augment the safety of physical human-robot collaboration by determining whether a touch from the user is intentional or not, using basic robot-mounted touch sensors and computer vision. The algorithm examines the location of the user's hands relative to the touched sensors in addition to observing where the user is looking. Machine learning is then used to classify in real time, with an F1-score of 86%, whether a touch is intentional or not so that the robot can react accordingly. The method is particularly applicable in collaborative manufacturing contexts, but it can also be applied anywhere a user physically interacts with a robot. We demonstrate the utility of the method in enhancing safety during human-robot collaboration through a simulated collaborative manufacturing scenario with the robot Baxter, but the method can easily be adapted to other systems.
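To make the classification step concrete, the following is a minimal sketch of how such a multimodal touch-intention classifier could be trained and evaluated, assuming scikit-learn. The placeholder data, the specific feature names, and the choice of a random-forest model are illustrative assumptions rather than the paper's exact implementation; the sketch only mirrors the pipeline described above, in which multimodal input features are reduced to five dimensions before a supervised classifier is fit and scored with the F1 metric.

```python
# Hypothetical sketch of the touch-intention classifier described in the abstract.
# Assumptions: scikit-learn, synthetic placeholder data, and a random-forest model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# X: one row per touch event; columns would hold multimodal interaction features
# such as touched-sensor location, hand-to-sensor distances, gaze direction, and
# pose descriptors. y: 1 = intentional touch, 0 = unintentional touch.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))      # placeholder feature matrix
y = rng.integers(0, 2, size=500)    # placeholder intention labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Reduce the inputs to five features before classification, mirroring the
# low-dimensional classifier described in the abstract.
clf = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=5)),
    ("model", RandomForestClassifier(n_estimators=100, random_state=0)),
])
clf.fit(X_train, y_train)

print("F1-score:", f1_score(y_test, clf.predict(X_test)))
```

In a real deployment the feature vector would be computed online from the binary touch sensors and the robot-mounted camera, and the trained classifier would be queried continuously so that the robot can react as soon as a touch is judged unintentional.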
