Millions of individuals live with upper extremity amputations, making them potential beneficiaries of hand and arm prostheses. While myoelectric prostheses have evolved to meet amputees' needs, challenges remain in their control. This study leverages surface electromyography (sEMG) sensors and machine learning to classify five fundamental hand gestures. Using features extracted from the electromyography data, we employed a nonlinear, multiple-kernel learning-based support vector machine classifier for gesture recognition. Our dataset comprised eight young nondisabled participants. We also conducted a comparative analysis of five distinct sensor placement configurations, which capture electromyography signals associated with index finger and thumb movements and with index finger and ring finger movements, and evaluated four different classifiers to determine which best classifies hand gestures. The dual-sensor setup placed to capture thumb and index finger movements proved the most effective, achieving 90% accuracy in classifying all five gestures with the support vector machine classifier. Applying multiple-kernel learning within the support vector machine also yielded the highest classification accuracy among all classifiers tested. These results demonstrate the potential of surface electromyography sensors and machine learning to enhance the control and functionality of myoelectric prostheses for individuals with upper extremity amputations.
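The abstract describes a multiple-kernel SVM trained on features extracted from sEMG windows. As a minimal sketch of that idea, the snippet below combines an RBF and a linear kernel with a fixed weight and trains scikit-learn's `SVC` on the precomputed kernel matrix. This is an illustration only: the synthetic data, the feature count, the fixed kernel weight, and the `combined_kernel` helper are all assumptions, not the paper's actual data, features, or learned kernel weights.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

# Hypothetical synthetic stand-in for per-window sEMG features
# (e.g., mean absolute value or waveform length per channel);
# five classes mimic the five hand gestures in the study.
rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 40, 8, 5
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Shuffle and split into train/test sets.
idx = rng.permutation(len(y))
split = int(0.75 * len(y))
train, test = idx[:split], idx[split:]

def combined_kernel(A, B, weight=0.5, gamma=0.1):
    """Fixed-weight sum of an RBF and a linear kernel -- a simple
    stand-in for the learned kernel weights of true multiple-kernel
    learning (an assumption, not the paper's method)."""
    return weight * rbf_kernel(A, B, gamma=gamma) + \
        (1.0 - weight) * linear_kernel(A, B)

# SVC accepts a precomputed Gram matrix via kernel="precomputed".
K_train = combined_kernel(X[train], X[train])
K_test = combined_kernel(X[test], X[train])

clf = SVC(kernel="precomputed").fit(K_train, y[train])
acc = clf.score(K_test, y[test])
print(f"test accuracy: {acc:.2f}")
```

In full multiple-kernel learning the per-kernel weights would themselves be optimized (e.g., by cross-validation or a dedicated MKL solver) rather than fixed as here.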