Abstract
The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although a large number of control approaches have been developed and investigated over the last decades, limited robustness in real-life conditions has often prevented their application in clinical settings and commercial products. In this paper, we investigate a multimodal approach that exploits eye-hand coordination to improve the control of myoelectric hand prostheses. The analyzed data come from the publicly available MeganePro Dataset 1, which includes multimodal data from transradial amputees and able-bodied subjects while grasping numerous household objects with ten grasp types. A continuous grasp-type classifier based on surface electromyography served as both intent detector and classifier. At the same time, eye-hand coordination parameters, gaze data, and object recognition in first-person videos made it possible to identify the object a person aims to grasp. The results show that the inclusion of visual information significantly increases the average offline classification accuracy, by up to 15.61 ± 4.22% for the transradial amputees and by up to 7.37 ± 3.52% for the able-bodied subjects, allowing transradial amputees to reach an average classification accuracy comparable to that of intact subjects. This suggests that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved by including visual information extracted by leveraging natural eye-hand coordination behavior, without placing additional cognitive burden on the user.
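To make the multimodal idea concrete, below is a minimal sketch of one plausible fusion scheme: combining the posterior probabilities of an sEMG grasp-type classifier with a prior over grasp types conditioned on the object identified from gaze and first-person video. The Bayesian-style product rule, the grasp-type list, and the OBJECT_GRASP_PRIOR table are illustrative assumptions, not the paper's actual method or dataset values.

```python
# Hypothetical sketch: fuse sEMG grasp-type posteriors with an
# object-conditioned prior derived from gaze and object recognition.
import numpy as np

GRASP_TYPES = ["power", "lateral", "tripod", "pinch"]  # illustrative subset

# Assumed prior: how likely each grasp type is, given the fixated object.
OBJECT_GRASP_PRIOR = {
    "mug": np.array([0.60, 0.25, 0.10, 0.05]),
    "key": np.array([0.05, 0.70, 0.10, 0.15]),
    "pen": np.array([0.05, 0.10, 0.45, 0.40]),
}

def fuse(emg_posterior: np.ndarray, fixated_object: str | None) -> int:
    """Combine sEMG evidence with the object-conditioned grasp prior.

    A simple product rule; the paper may weight or gate the modalities
    differently.
    """
    if fixated_object is None:  # no reliable gaze/object info: sEMG only
        return int(np.argmax(emg_posterior))
    prior = OBJECT_GRASP_PRIOR[fixated_object]
    fused = emg_posterior * prior
    return int(np.argmax(fused / fused.sum()))

# Example: ambiguous sEMG evidence disambiguated by the fixated object.
emg_p = np.array([0.35, 0.35, 0.20, 0.10])
print(GRASP_TYPES[fuse(emg_p, "key")])  # -> "lateral"
```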
Highlights
The loss of a hand deprives an individual of an essential part of the body, and a prosthesis that can be controlled intuitively and reliably is essential to effectively restore the missing functionality
The results show that exploiting eye-hand coordination significantly increases the average classification accuracy for both intact subjects and amputees, suggesting that the robustness of hand prosthesis control based on grasp-type recognition can be improved significantly with the inclusion of visual information
We used the publicly available MeganePro Dataset 1, containing surface electromyography (sEMG), gaze, and first-person video data collected from 15 transradial amputees and 30 able-bodied subjects performing grasping tasks on household objects in static and dynamic conditions
Summary
The loss of a hand deprives an individual of an essential part of the body, and a prosthesis that can be controlled intuitively and reliably is essential to effectively restore the missing functionality. Dexterous hand prostheses with notable mechanical capabilities are commercially available. They commonly have independent digit actuation, active thumb opposition, sufficient grip force, and sometimes a motorized wrist. These characteristics make such devices capable of performing a large variety of grasps that can substantially simplify the execution of activities of daily living (ADL) for hand amputees. Pattern recognition-based approaches are arguably the most investigated ones in scientific research. They identify the grasp type by applying pattern recognition methods to the electrical activity of the remnant musculature recorded via surface electromyography (sEMG) (Hudgins et al., 1993; Scheme and Englehart, 2011; Jiang et al., 2012). The use of visual modalities is motivated by the natural eye-hand coordination behavior humans use during grasping, where the information needed to plan the motor action is typically gathered by fixating the target object before the hand reaches it.
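As a concrete illustration of sEMG pattern recognition, the sketch below computes the classic time-domain features introduced by Hudgins et al. (1993): mean absolute value, zero crossings, slope sign changes, and waveform length. The window length, amplitude threshold, and synthetic input are assumptions for illustration, not the paper's actual processing settings.

```python
# Minimal sketch of Hudgins time-domain sEMG features; threshold and
# window size are illustrative assumptions.
import numpy as np

def td_features(window: np.ndarray, thresh: float = 0.01) -> np.ndarray:
    """Compute MAV, ZC, SSC, and WL for one single-channel sEMG window."""
    diff = np.diff(window)
    mav = np.mean(np.abs(window))                 # mean absolute value
    zc = np.sum((window[:-1] * window[1:] < 0)    # zero crossings above
                & (np.abs(diff) > thresh))        # an amplitude threshold
    ssc = np.sum((diff[:-1] * diff[1:] < 0)       # slope sign changes
                 & (np.abs(diff[:-1]) > thresh))
    wl = np.sum(np.abs(diff))                     # waveform length
    return np.array([mav, zc, ssc, wl])

# Example: features for one ~200 ms analysis window of synthetic sEMG;
# per-channel features would be stacked to form the classifier input.
rng = np.random.default_rng(0)
emg_window = rng.normal(scale=0.05, size=380)
print(td_features(emg_window))
```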