Abstract

Human-computer interaction (HCI) has a broad range of applications, and many HCI systems are based on bio-signal analysis and classification. The surface electromyographic (sEMG) signal is one of the most widely used of these signals; it is generated by muscle activation, although the underlying details are rather complex. Applications of sEMG signals are commonly referred to as myoelectric control, since the dominant use of the signal is to activate a device, even though feedback (as the term control may imply) is not always part of the process. With the development of deep neural networks, various deep learning architectures have been applied to sEMG-based gesture recognition, and many researchers have reported good performance. Nevertheless, challenges remain in accurately recognizing sEMG patterns generated by gestures of the hand or upper arm. One of the difficulties in hand gesture recognition, for instance, is the influence of limb position: several studies have shown that gesture classification accuracy decreases when the limb position changes, even if the gesture itself remains the same. Prior work by our team has shown that dynamic gesture recognition is in principle more reliable for detecting human intent, which is often the ultimate goal of gesture recognition. In this paper, a Convolutional Neural Network combined with Long Short-Term Memory (CNN-LSTM) is proposed to classify five common dynamic gestures, each performed in five different limb positions. The trained model, which achieves high recognition accuracy, is then used to enable a human subject to control a robotic arm.
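
The sketch below illustrates the general shape of such a CNN-LSTM classifier for windowed sEMG data. It is not the authors' exact network: the 8-channel input, 200-sample window length, layer sizes, and other hyperparameters are illustrative assumptions; only the five-class output follows from the abstract.

```python
# Minimal CNN-LSTM sketch for sEMG gesture classification (illustrative only).
# Assumes 8 electrode channels, 200-sample windows, and 5 dynamic gesture classes.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=8, n_classes=5, hidden=64):
        super().__init__()
        # 1-D convolutions extract local features along the time axis
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models the temporal evolution of the convolutional features,
        # which is what makes dynamic (time-varying) gestures separable
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, channels, time)
        feats = self.conv(x)             # (batch, 64, time/4)
        feats = feats.permute(0, 2, 1)   # (batch, time/4, 64) for the LSTM
        _, (h_n, _) = self.lstm(feats)   # final hidden state summarizes the window
        return self.fc(h_n[-1])          # logits over the 5 gesture classes

# Example: a batch of 16 windows, each 8 channels x 200 samples
logits = CNNLSTM()(torch.randn(16, 8, 200))
print(logits.shape)  # torch.Size([16, 5])
```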
