Abstract

Existing research on myoelectric control systems primarily focuses on extracting discriminative characteristics of the electromyographic (EMG) signal by designing handcrafted features. Recently, however, deep learning techniques have been applied to the challenging task of EMG-based gesture recognition. The adoption of these techniques slowly shifts the focus from feature engineering to feature learning. Nevertheless, the black-box nature of deep learning makes it hard to understand the type of information learned by the network and how it relates to handcrafted features. Additionally, due to the high variability in EMG recordings between participants, deep features tend to generalize poorly across subjects using standard training methods. Consequently, this work introduces a new multi-domain learning algorithm, named ADANN (Adaptive Domain Adversarial Neural Network), which significantly enhances (p = 0.00004) inter-subject classification accuracy by an average of 19.40% compared to standard training. Using ADANN-generated features, this work provides the first topological data analysis of EMG-based gesture recognition for the characterization of the information encoded within a deep network, using handcrafted features as landmarks. This analysis reveals that handcrafted features and the learned features (in the earlier layers) both try to discriminate between all gestures, but do not encode the same information to do so. In the later layers, the learned features are inclined to instead adopt a one-vs.-all strategy for a given class. Furthermore, by using convolutional network visualization techniques, it is revealed that learned features actually tend to ignore the most activated channel during contraction, which is in stark contrast with the prevalence of handcrafted features designed to capture amplitude information. Overall, this work paves the way for hybrid feature sets by providing a clear guideline of complementary information encoded within learned and handcrafted features.
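ADANN's name places it in the domain-adversarial (DANN-style) family of training methods, whose characteristic mechanism is a gradient reversal layer: an identity mapping on the forward pass whose backward pass negates (and scales) the gradient, so the feature extractor learns to confuse a domain (here, subject) classifier. The sketch below is a minimal illustration of that mechanism only; the class and parameter names (`GradientReversal`, `lam`) are illustrative and not the paper's implementation.

```python
import numpy as np

class GradientReversal:
    """Minimal sketch of a gradient reversal layer (DANN-style training).

    Forward: identity, so features flow unchanged to the domain classifier.
    Backward: gradient is negated and scaled by `lam`, pushing the feature
    extractor to produce subject-invariant (domain-confusing) features.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # strength of the adversarial signal

    def forward(self, x):
        return x

    def backward(self, grad_output):
        return -self.lam * grad_output
```

In a full training loop, this layer would sit between the shared feature extractor and the subject-discriminator head, while the gesture-classification head receives the unreversed gradient.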

Highlights

  • This paper presents the first in-depth analysis of features learned using deep learning for EMG-based hand gesture recognition

  • The type of information encoded within learned features, and their relationship to handcrafted features, was characterized using a combination of topological data analysis (Mapper), network-interpretability visualization (Guided Grad-CAM), machine learning, and visualization of the information flow through feature regression
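The Mapper algorithm referenced above builds a graph summarizing the shape of a point cloud: a filter (lens) function projects the data, overlapping intervals cover the filter's range, points in each interval are clustered, and clusters sharing points are connected. The following is a minimal self-contained sketch of that generic pipeline, not the paper's configuration; the lens (first coordinate), clustering rule (single-linkage at threshold `eps`), and parameter names are all illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def simple_cluster(points, idx, eps):
    """Single-linkage clustering: connected components at distance <= eps."""
    clusters = []
    remaining = list(idx)
    while remaining:
        stack = [remaining.pop()]
        comp = set(stack)
        while stack:
            i = stack.pop()
            for j in list(remaining):
                if np.linalg.norm(points[i] - points[j]) <= eps:
                    remaining.remove(j)
                    comp.add(j)
                    stack.append(j)
        clusters.append(sorted(comp))
    return clusters

def mapper(points, n_intervals=4, overlap=0.25, eps=1.0):
    """Minimal Mapper: 1-D lens, overlapping interval cover, cluster per bin."""
    lens = points[:, 0]  # illustrative lens: first coordinate
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    nodes = []
    for k in range(n_intervals):
        a = lo + k * width - overlap * width
        b = lo + (k + 1) * width + overlap * width
        idx = [i for i in range(len(points)) if a <= lens[i] <= b]
        nodes.extend(simple_cluster(points, idx, eps))
    # Connect clusters that share points (possible because intervals overlap).
    edges = sorted((u, v) for u, v in combinations(range(len(nodes)), 2)
                   if set(nodes[u]) & set(nodes[v]))
    return nodes, edges
```

On two well-separated groups of points, this produces two disconnected nodes; with denser, bridging data, shared points in the overlap regions create edges, revealing the data's connectivity.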

Introduction

Surface Electromyography (sEMG) is a technique employed in a vast array of applications, from assistive technologies (Phinyomark et al, 2011c; Scheme and Englehart, 2011) to biomechanical analysis (Andersen et al, 2018), and more generally as a way to interface with computers and robots (Zhang et al, 2009; St-Onge et al, 2019). Within the context of sEMG-based gesture recognition, deep learning has been shown to be competitive with the current state of the art (Côté-Allard et al, 2019a) and, when combined with handcrafted features, to outperform it (Chen et al, 2019). This last result indicates that, for sEMG signals, deep-learned features provide useful information that may be complementary to the features engineered throughout the years. However, the black-box nature of these deep networks makes it challenging to understand what type of information is encapsulated throughout the network, and how to leverage it.
