Abstract

Effective human-robot collaboration requires an intuitive and fluent understanding of human motion in shared tasks. The object held in the hand provides some of the most valuable information about a human's intended task. In this letter, we propose a simple and affordable approach in which a wearable force-myography device is used to classify objects grasped by a human. The device, worn on the forearm, incorporates 15 force sensors whose readings convey information about the configuration of the hand and fingers during grasping. A classifier is therefore trained to identify various objects using data recorded while holding them. To augment the classifier, we propose an iterative approach in which additional signals are acquired in real time to increase certainty about the predicted object. We show that the approach provides robust classification: the device can be taken off and placed back on while maintaining high accuracy. The approach also improves the performance of trained classifiers that initially produced low accuracy due to insufficient data or non-optimal hyper-parameters. A classification success rate of more than 97% is reached in a short period of time. Furthermore, we analyze the key locations of sensors on the forearm that provide the most accurate and robust classification.
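As a rough illustration of the iterative scheme described above, the sketch below accumulates class probabilities from successive 15-channel sensor readings until one object label exceeds a confidence threshold. The classifier choice, the fusion rule, the threshold value, and all names (read_sensors, iterative_predict) are assumptions for illustration only and are not taken from the letter.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def train_grasp_classifier(X_train, y_train):
    """X_train: (n_samples, 15) force-sensor readings; y_train: object labels.
    Model choice is an assumption, not the one used in the letter."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    return clf


def iterative_predict(clf, read_sensors, threshold=0.97, max_steps=50):
    """Fuse per-class probabilities over successive readings until the
    leading object label exceeds the confidence threshold."""
    log_posterior = np.zeros(len(clf.classes_))
    posterior = np.full(len(clf.classes_), 1.0 / len(clf.classes_))
    for _ in range(max_steps):
        x = np.asarray(read_sensors()).reshape(1, -1)  # one 15-sensor reading
        probs = clf.predict_proba(x)[0]
        log_posterior += np.log(probs + 1e-12)         # accumulate evidence
        posterior = np.exp(log_posterior - log_posterior.max())
        posterior /= posterior.sum()
        if posterior.max() >= threshold:
            break
    return clf.classes_[np.argmax(posterior)], posterior.max()
```

In this sketch, each new reading multiplies into a running posterior over object classes, so uncertain single-sample predictions are sharpened by repeated evidence; the actual fusion rule used in the letter may differ.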
