Abstract

sEMG gesture recognition is often used in wearable devices, so balancing model complexity and classification ability is important when designing models. Currently, methods employing large, complex deep learning models can achieve high accuracy, but they are too computationally intensive to be deployed on wearable devices. In this study, we propose a Multilayer Perceptron (MLP) combined with a max-pooling network (MCMP-Net), a model that is lightweight and has strong classification ability. The data in each frame are projected into a latent space via an MLP and then max-pooled as features for classification; we also design a multi-head mechanism to improve classification ability and a completion embedding mechanism to add sequential information. We evaluate the model on seven benchmark datasets: NinaPro DB1, DB2, DB4, DB5, and CapgMyo DB-a, DB-b, and DB-c, achieving accuracies of 91.8 %, 84.5 %, 85.5 %, 93.1 %, 90.5 %, 90.3 %, and 94.5 %, respectively. Compared to state-of-the-art methods, the results show that our model is competitive in accuracy; furthermore, it is much more computationally efficient and easier to deploy on wearable devices.
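
The sketch below illustrates the core idea stated in the abstract: a shared MLP projects each sEMG frame into a latent space, the latent vectors are max-pooled over the time axis, and the pooled feature is classified. It is a minimal sketch under stated assumptions, not the authors' exact MCMP-Net: the class name, layer sizes, and single-head layout are illustrative, and the multi-head mechanism and completion embedding described in the abstract are omitted.

```python
import torch
import torch.nn as nn


class FrameMLPMaxPool(nn.Module):
    """Hypothetical per-frame MLP + temporal max-pooling classifier."""

    def __init__(self, n_channels: int, latent_dim: int = 128, n_classes: int = 52):
        super().__init__()
        # Shared MLP applied independently to every frame in the window
        self.frame_mlp = nn.Sequential(
            nn.Linear(n_channels, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) window of sEMG samples
        z = self.frame_mlp(x)           # (batch, time, latent_dim)
        pooled, _ = z.max(dim=1)        # max pool across the time axis
        return self.classifier(pooled)  # (batch, n_classes) gesture logits


# Example usage on a random 10-channel, 200-frame window (shapes are assumptions)
logits = FrameMLPMaxPool(n_channels=10)(torch.randn(4, 200, 10))
print(logits.shape)  # torch.Size([4, 52])
```

Because the MLP weights are shared across frames and max pooling has no parameters, the per-window cost grows only linearly with window length, which is consistent with the abstract's emphasis on lightweight, wearable-friendly inference.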
