Abstract

In recent years, myoelectric interfaces using surface electromyogram (EMG) signals have been developed to assist people with physical disabilities. In myoelectric interfaces for robotic hands or arms in particular, decoding the user's upper-limb movement intention is essential for properly controlling the prosthesis. However, because previous experiments involved only healthy subjects, the possibility of classifying reaching-to-grasping movements from the EMG signals of a residual limb lacking below-elbow muscles had not yet been investigated. Therefore, we aimed to investigate the possibility of classifying reaching-to-grasping tasks for prosthesis users using EMG from the upper arm and upper body, without relying on wrist muscles. In our study, seven healthy subjects, one trans-radial amputee, and one wrist amputee participated and performed 10 repetitions of 12 reaching-to-grasping tasks based on the Southampton Hand Assessment Procedure (SHAP) with 12 differently weighted (light and heavy) objects. The acquired EMG was processed using principal component analysis (PCA) and a convolutional neural network (CNN) to decode the tasks. With the PCA–CNN method, the average accuracy of the healthy subjects was 69.4 ± 11.4% using only EMG signals from the upper arm and upper body. This was significantly higher, by about 8%, than the 61.6 ± 13.7% obtained with the widely used time-domain and auto-regressive features with a support vector machine (TDAR–SVM). For the amputees, however, the PCA–CNN method showed slightly lower performance. In addition, because grip force is also important in activities of daily living when grasping an object after reaching, we further investigated the possibility of classifying the light and heavy versions of the objects in each reaching-to-grasping task. Here, too, the PCA–CNN method showed higher accuracy, at 70.1 ± 9.8%.
Based on our results, the PCA–CNN method can help improve the performance of classifying reaching-to-grasping tasks without wrist EMG signals. Our findings and decoding method can be used to further develop practical human–machine interfaces based on EMG signals.
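The PCA stage of a pipeline like the one described above can be sketched as follows. This is a minimal illustration only: it assumes a feature matrix with windowed multi-channel EMG features as rows, and the paper's actual windowing, normalization, component count, and CNN architecture are not specified here.

```python
import numpy as np

def pca_project(X, n_components=3):
    """Project an EMG feature matrix X (samples x channels/features)
    onto its top principal components.

    Sketch only: eigen-decomposition of the sample covariance matrix;
    the number of components retained here is an assumed example value.
    """
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)                 # center each column
    cov = np.cov(Xc, rowvar=False)          # covariance across features
    vals, vecs = np.linalg.eigh(cov)        # eigh returns ascending order
    order = np.argsort(vals)[::-1]          # sort descending by variance
    W = vecs[:, order[:n_components]]       # top-k principal directions
    return Xc @ W                           # reduced representation
```

The reduced matrix (samples × components) would then be reshaped as needed to serve as input to a classifier such as a CNN; that second stage is omitted here.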

Highlights

  • Myoelectric interfaces based on the electromyogram (EMG) have been developed to support the daily living of amputees

  • Significant differences were revealed between the “time domain and auto-regressive (TDAR)–support vector machine (SVM) (8ch)” and the “TDAR–SVM (6ch)” conditions (t = 7.22, p = 0.002), and between the “principal component analysis (PCA)–convolutional neural network (CNN) (8ch)”

  • We investigated the possibility of classifying reaching-to-grasping tasks using the EMG signals from the upper arm and upper body



Introduction

Myoelectric interfaces based on the electromyogram (EMG) have been developed to support the daily living of amputees. Owing to their ease of use and non-invasiveness in supporting daily living through interaction with external devices, myoelectric interfaces have become a useful technology (Hargrove et al., 2007; Castellini et al., 2009). Various features based on the time and frequency domains, as well as numerous classifiers, have been studied in detail to improve the classification of movement intent, with varying degrees of success (Zardoshti-Kermani et al., 1995; Chu et al., 2007; Phinyomark et al., 2013; Ameri et al., 2014; Park et al., 2015; Kim et al., 2019). The classified user intentions are then decoded into control commands for interfacing with external devices.
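As a hedged sketch of the kind of time-domain and auto-regressive (TDAR) features referenced above, the classic Hudgins-style time-domain features plus least-squares AR coefficients might look like the following. The exact feature set, window length, AR order, and noise threshold used in the cited studies are assumptions here, chosen only for illustration.

```python
import numpy as np

def td_features(window, zc_thresh=0.01):
    """Hudgins-style time-domain features for one single-channel EMG window.

    zc_thresh is a noise threshold for zero crossings and slope sign
    changes (example value; real thresholds depend on the recording).
    """
    x = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(x))                # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))         # waveform length
    # zero crossings whose amplitude change exceeds the noise threshold
    zc = np.sum((x[:-1] * x[1:] < 0) &
                (np.abs(x[:-1] - x[1:]) > zc_thresh))
    # slope sign changes above the noise threshold
    d = np.diff(x)
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 ((np.abs(d[:-1]) > zc_thresh) | (np.abs(d[1:]) > zc_thresh)))
    return np.array([mav, wl, zc, ssc])

def ar_coeffs(window, order=4):
    """Auto-regressive coefficients via least squares (order 4 is a
    commonly used choice in TDAR feature sets, assumed here)."""
    x = np.asarray(window, dtype=float)
    # each column k holds the signal delayed by k+1 samples
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1]
                         for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a
```

Features computed per channel and per window would then be concatenated into a vector and fed to a classifier such as an SVM.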

