Abstract

Practical human-robot collaboration requires intuitive recognition of human intention during shared work. A grasped object, such as a tool held by the human, provides vital information about the upcoming task. In this letter, we explore the use of a wearable device to non-visually recognize objects within the human hand across various possible grasps. The device is based on Force-Myography (FMG), in which simple and affordable force sensors measure perturbations of the forearm muscles. We propose a novel Deep Neural-Network architecture termed Flip-U-Net, inspired by the familiar U-Net architecture used for image segmentation. The Flip-U-Net is trained on data collected from several human participants and with multiple objects of each class. Data is collected while manipulating the objects between different grasps and arm postures. The data is also pre-processed with augmentation and used to train a Variational Autoencoder for dimensionality reduction. While prior work did not provide a transferable FMG-based model, we show that the proposed network can classify objects grasped by multiple new users without additional training effort. Experiments with 12 test participants show a classification accuracy of approximately 95% over multiple grasps and objects. Correlations between accuracy and various anthropometric measures are also presented. Furthermore, we show that the model can be fine-tuned to a particular user based on an anthropometric measure.
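
The abstract does not detail the network's internals, but the general idea can be illustrated. Below is a minimal, hypothetical PyTorch sketch of a U-Net-inspired encoder-decoder with skip connections and a classification head, assuming the FMG input is a fixed-length vector of force-sensor readings. The sensor count, layer widths, and number of object classes are illustrative assumptions, not the authors' specification of Flip-U-Net.

    # Hypothetical sketch of a U-Net-inspired FMG classifier (not the authors' exact model).
    import torch
    import torch.nn as nn

    class FlipUNetSketch(nn.Module):
        def __init__(self, n_sensors: int = 16, n_classes: int = 8):
            super().__init__()
            # Contracting path: compress the FMG reading into a low-dimensional code.
            self.enc1 = nn.Sequential(nn.Linear(n_sensors, 64), nn.ReLU())
            self.enc2 = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
            self.bottleneck = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
            # Expanding path with skip connections, as in U-Net.
            self.dec2 = nn.Sequential(nn.Linear(16 + 32, 32), nn.ReLU())
            self.dec1 = nn.Sequential(nn.Linear(32 + 64, 64), nn.ReLU())
            # Classification head instead of a segmentation map.
            self.head = nn.Linear(64, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            e1 = self.enc1(x)
            e2 = self.enc2(e1)
            b = self.bottleneck(e2)
            d2 = self.dec2(torch.cat([b, e2], dim=-1))
            d1 = self.dec1(torch.cat([d2, e1], dim=-1))
            return self.head(d1)  # per-class logits for the grasped object

    # Usage: classify a batch of FMG samples.
    model = FlipUNetSketch()
    fmg_batch = torch.randn(4, 16)   # 4 samples, 16 force sensors (assumed)
    pred = model(fmg_batch).argmax(dim=-1)  # predicted object class per sample

The key design point carried over from U-Net is the concatenation of encoder activations into the decoder (skip connections), which lets the classifier use both compressed and raw-scale features of the FMG signal.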
