Abstract

A classification framework for hand gestures using electromyography (EMG) signals in prosthetic hands is presented. Leveraging the multi-scale characteristics and temporal nature of EMG signals, a Convolutional Neural Network (CNN) extracts multi-scale features and classifies them with spatial-temporal attention. A multi-scale coarse-grained layer introduced at the input of a one-dimensional CNN (1D-CNN) facilitates multi-scale feature extraction. The multi-scale features are fed into the attention layer and then passed to a fully connected layer for classification. The proposed model achieves classification accuracies of 93.4%, 92.8%, 91.3%, and 94.1% on Ninapro DB1, DB2, DB5, and DB7, respectively, supporting more reliable gesture recognition for prosthetic hand users.

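To make the described pipeline concrete, below is a minimal PyTorch sketch of the flow outlined in the abstract: coarse-graining the input signal at several scales, extracting features with per-scale 1D-CNN branches, applying attention over the resulting feature tokens, and classifying with a fully connected layer. All hyperparameters (the set of scales, channel counts, kernel sizes, and the number of classes) are illustrative assumptions, and a standard multi-head self-attention layer stands in for the paper's spatial-temporal attention; none of these values are taken from the paper.

```python
import torch
import torch.nn as nn

def coarse_grain(x, scale):
    """Multi-scale coarse-graining: average non-overlapping windows of
    length `scale` along the time axis. x: (batch, channels, time)."""
    b, c, t = x.shape
    t_trim = (t // scale) * scale                  # drop the ragged tail
    return x[:, :, :t_trim].reshape(b, c, t_trim // scale, scale).mean(-1)

class MultiScaleEMGNet(nn.Module):
    def __init__(self, in_channels=10, n_classes=52, scales=(1, 2, 3)):
        super().__init__()
        self.scales = scales
        # One 1D-CNN branch per scale (weight sharing across scales is
        # another plausible reading of the abstract).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
                nn.BatchNorm1d(32), nn.ReLU(),
                nn.AdaptiveAvgPool1d(16),          # fixed-length feature map
            ) for _ in scales
        ])
        self.attn = nn.MultiheadAttention(embed_dim=32, num_heads=4,
                                          batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                          # x: (batch, ch, time)
        feats = [branch(coarse_grain(x, s))        # (batch, 32, 16) each
                 for branch, s in zip(self.branches, self.scales)]
        tokens = torch.cat(feats, dim=2).transpose(1, 2)   # (batch, tokens, 32)
        attended, _ = self.attn(tokens, tokens, tokens)    # attention over tokens
        return self.fc(attended.mean(dim=1))       # pool tokens, then classify

# Example: a batch of 8 EMG windows, 10 channels, 400 samples each.
logits = MultiScaleEMGNet()(torch.randn(8, 10, 400))
print(logits.shape)  # torch.Size([8, 52])
```

The coarse-graining here averages non-overlapping windows, the common definition from multiscale entropy analysis; the paper's coarse-grained layer may differ in detail.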