Abstract

Current state-of-the-art myoelectric interfaces employ traditional pattern recognition (PR) algorithms to decode electromyogram (EMG) signals into hand movements for controlling artificial limbs. Recently, deep learning (DL) models have also been exploited for EMG feature learning and extraction. Models such as Convolutional Neural Networks (CNNs), which capture the spatial correlations, and Long Short-Term Memory (LSTM) networks, which capture the non-linear temporal dynamics of EMG time-series data, have been shown to outperform traditional EMG PR systems. Nevertheless, the large number of model parameters, long training times, and large amounts of data required to train these DL models remain limiting factors that may hinder their translation into clinically viable prostheses. Consequently, rather than applying DL directly, this paper leverages concepts derived from these models to build upon our proposed concept of a Fusion of Time Domain Descriptors (FTDD). The FTDD features are augmented with Range Spatial Filtering (RSF) to capture spatial correlations and are combined within an LSTM-style framework. This process, denoted Recurrent Spatial-Temporal Fusion (RSTF), can be applied in combination with any traditional feature extraction method to exploit temporal and spatial correlations, with the potential for bi-directional applications. The advantages of the proposed RSTF method include (1) the memory concept, which captures long- and short-term spatial and temporal dependencies of the EMG signals; (2) significantly improved performance, outperforming other state-of-the-art models; and (3) simplicity and fairly low computational cost of feature extraction. Results are benchmarked against several feature extraction methods, demonstrating the power of RSTF using data from 82 subjects across five EMG databases with varying recording characteristics. The proposed method significantly outperforms all other methods tested for EMG pattern recognition, including a deep LSTM and CNN methods previously reported in the literature, and does so at a fraction of the computational cost. On the most challenging dataset, improvements of as much as 15% were found.
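
To make the described pipeline concrete, the following is a minimal, hypothetical Python/NumPy sketch of how per-channel time-domain descriptors, a range-style spatial filter, and an LSTM-style recurrent fusion step could be chained over successive EMG analysis windows. The descriptor set, the RSF definition, and the single gating equation shown here are simplified assumptions made for illustration only; they are not the exact FTDD or RSTF formulation used in the paper.

```python
# Illustrative sketch only: descriptor set, RSF variant, and gating are
# simplified assumptions, not the authors' published formulation.
import numpy as np

def time_domain_descriptors(window):
    """Simple per-channel time-domain features for one analysis window.
    window: (samples, channels). Returns (channels, n_features)."""
    mav = np.mean(np.abs(window), axis=0)                     # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)      # waveform length
    zc = np.sum(np.signbit(window[:-1]) != np.signbit(window[1:]), axis=0)  # zero crossings
    return np.stack([mav, wl, zc], axis=1)

def range_spatial_filter(window):
    """Assumed RSF variant: difference between each channel and its neighbour,
    emphasising spatial correlations across the electrode array."""
    return window - np.roll(window, shift=1, axis=1)

def rstf_step(features, memory, alpha=0.5):
    """LSTM-style fusion of the current feature vector with a running memory of
    past windows (one sigmoid gate stands in for the full gating logic)."""
    gate = 1.0 / (1.0 + np.exp(-np.clip(features - memory, -50.0, 50.0)))
    return alpha * gate * memory + (1.0 - alpha) * features

# Example over a stream of EMG windows: emg has shape (n_windows, samples, channels).
emg = np.random.randn(10, 200, 8)
memory = None
for win in emg:
    spatial = range_spatial_filter(win)
    feats = np.concatenate([time_domain_descriptors(win),
                            time_domain_descriptors(spatial)], axis=1).ravel()
    memory = feats if memory is None else rstf_step(feats, memory)
# 'memory' now holds a fused spatio-temporal feature vector for a classifier.
```

Consistent with the abstract's claim that RSTF can wrap any traditional feature extraction method, the fused vector produced by such a loop would feed a conventional classifier (for example, LDA or SVM) in place of the raw windowed features.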
