Abstract

Surface electromyography (sEMG)-based gesture recognition systems provide intuitive and accurate recognition of various gestures in human-computer interaction. In this study, an sEMG-based hand posture recognition algorithm was developed, considering three main problems: electrode shift, feature vectors, and posture groups. The sEMG signal was measured using an armband sensor under electrode shift. An artificial neural network classifier was trained using 21 feature vectors for seven different posture groups, and the inter-session and inter-feature Pearson correlation coefficients (PCCs) were calculated. The results indicate that classification performance improved with the number of electrode-shift training sessions; four sessions were sufficient for efficient training, and feature vectors with a high inter-session PCC (r > 0.7) exhibited high classification accuracy. Similarities between postures within a posture group decreased the classification accuracy. Our results indicate that classification accuracy can be improved by adding electrode-shift training sessions and that the PCC is useful for selecting feature vectors. Furthermore, hand posture selection was as important as feature vector selection. These findings will help in optimizing sEMG-based pattern recognition algorithms more easily and quickly.
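To make the feature-screening step concrete, the following is a minimal sketch (not the authors' code) of how inter-session PCCs could be computed for the 21 features and thresholded at r > 0.7. The per-session feature tables, the number of sessions, and the use of per-posture mean feature values are illustrative assumptions.

    import itertools
    import numpy as np
    from scipy.stats import pearsonr

    def inter_session_pcc(feature_tables):
        """feature_tables: one (n_postures, n_features) array per session,
        where entry [p, f] is the mean value of feature f for posture p.
        Returns the mean pairwise Pearson r of each feature across sessions."""
        n_features = feature_tables[0].shape[1]
        r_mean = np.zeros(n_features)
        for f in range(n_features):
            rs = [pearsonr(a[:, f], b[:, f])[0]
                  for a, b in itertools.combinations(feature_tables, 2)]
            r_mean[f] = np.mean(rs)
        return r_mean

    # Placeholder data: 5 sessions, 7 postures, 21 features (random values).
    feature_tables = [np.random.default_rng(s).random((7, 21)) for s in range(5)]
    r = inter_session_pcc(feature_tables)
    print("features passing r > 0.7:", np.where(r > 0.7)[0])

In this sketch a feature is retained only if its posture profile is reproducible across sessions, which mirrors the reported link between a high inter-session PCC and high classification accuracy.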

Highlights

  • Gestures, involving physical movements of the hands, face, or body, are a form of communication used to convey meaningful information or interact with the environment [1]

  • The results indicate that feature vectors with a strong linear inter-session relationship (Pearson correlation coefficient (PCC), r > 0.7) had higher classification accuracy than feature vectors with a low inter-session PCC (r < 0.7), and this held for all training conditions and posture groups

  • This paper presented a surface electromyography (sEMG)-based hand posture recognition algorithm using an armband sensor, considering the following three problems: electrode shift, feature vector selection, and posture selection (see the training sketch after these highlights)
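As a rough illustration of the electrode-shift training idea, the snippet below pools feature vectors from several recording sessions with shifted electrode placements to train a small feed-forward network and evaluates it on a held-out session. This is a sketch, not the authors' implementation: the synthetic session data, the network size, and the scikit-learn pipeline are assumptions made for illustration.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_sessions, n_windows, n_features, n_postures = 5, 140, 21, 7

    # Placeholder per-session data: feature matrices and posture labels.
    X_sessions = [rng.normal(size=(n_windows, n_features)) for _ in range(n_sessions)]
    y_sessions = [rng.integers(0, n_postures, size=n_windows) for _ in range(n_sessions)]

    # Pool the first k electrode-shift sessions for training (the abstract
    # reports that about four sessions suffice) and hold out the last session.
    k = 4
    X_train = np.vstack(X_sessions[:k])
    y_train = np.concatenate(y_sessions[:k])
    X_test, y_test = X_sessions[-1], y_sessions[-1]

    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                      random_state=0))
    clf.fit(X_train, y_train)
    print("accuracy on the unseen session:", clf.score(X_test, y_test))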



Introduction

Gestures, involving physical movements of the hands, face, or body, are a form of communication used to convey meaningful information or interact with the environment [1]. Hand gestures are the gestures most commonly applied in machine learning algorithms as an interface for human-computer interaction (HCI), because they constitute the most natural and efficient movements in daily life [2]. As an HCI interface, a hand gesture recognition system has three advantages [3]. Among them, such a system can serve as an alternative interface for overcoming physical disabilities. The need for a gesture-based HCI interface is increasing, owing to the growing number of people who can only communicate through hand gestures (e.g., sign language for the deaf).

