Abstract

The applications of myoelectric interfaces are largely limited by the efficacy of decoding motion intent from electromyographic (EMG) signals. Current EMG classification methods often rely heavily on hand-crafted features, or ignore key channel and inter-feature information. To address these issues, a multi-scale feature extraction network (MSFEnet) based on channel-spatial attention is proposed to decode EMG signals for gesture recognition. Specifically, we fuse the spatio-temporal characteristics of the EMG signal at different scales. Then, we construct a feature channel attention module and a feature spatial attention module to capture the most informative channel and spatial features. To evaluate the efficacy of the proposed method, extensive experiments are conducted on two public datasets: Ninapro DB2 and CapgMyo DB-a. Average accuracies of 86.21%, 90.77%, and 92.53% are achieved on Exercises B, C, and D of Ninapro DB2, respectively, and 98.85% on CapgMyo DB-a. The experimental results demonstrate that MSFEnet is more capable of extracting fused temporal and spatial features. It generalizes well and achieves higher classification accuracy than state-of-the-art methods.
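The abstract does not specify the internals of the two attention modules, but the channel-then-spatial gating pattern it names is well established (e.g. squeeze-and-excitation for channels, CBAM-style pooling for the spatial axis). The following is a minimal NumPy sketch of that general pattern on a toy EMG feature map; the layer sizes, random weights, and function names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, reduction=2, seed=0):
    """x: (C, T) feature map -- C EMG channels, T time steps.
    Squeeze over time, pass through a small 2-layer MLP, gate channels.
    Weights are random placeholders; in practice they are learned."""
    rng = np.random.default_rng(seed)
    C, _ = x.shape
    w1 = rng.standard_normal((C // reduction, C)) * 0.1
    w2 = rng.standard_normal((C, C // reduction)) * 0.1
    s = x.mean(axis=1)                          # squeeze: (C,)
    a = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))   # excitation: (C,) in (0, 1)
    return x * a[:, None]                       # re-weight each channel

def spatial_attention(x, seed=1):
    """Pool over the channel axis (mean and max), mix the two pooled
    maps with a learned pair of weights, gate each time step."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(2) * 0.1
    pooled = np.stack([x.mean(axis=0), x.max(axis=0)])  # (2, T)
    a = sigmoid(w @ pooled)                             # (T,) in (0, 1)
    return x * a[None, :]                               # re-weight time steps

# Toy feature map: 8 EMG channels, 16 time steps.
x = np.random.default_rng(42).standard_normal((8, 16))
y = spatial_attention(channel_attention(x))
print(y.shape)  # attention gating preserves the (C, T) shape
```

The gates only rescale the feature map, so the output shape matches the input; the network downstream then classifies the re-weighted features.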
