Abstract

Surface electromyography (sEMG) armband-based gesture recognition is an active research topic that aims to identify hand gestures with a single row of sEMG electrodes. As a typical biological signal, sEMG is temporally nonstationary on each channel and spatially related to multiple adjacent muscles, which hinders effective representation for gesture recognition. To address both aspects, we propose a spatial–temporal features-based gesture recognition method (STF-GR) in this article. Specifically, STF-GR first decomposes the nonstationary multichannel sEMG by multivariate empirical mode decomposition, which jointly transforms each channel into a series of stationary subsignals. This preserves temporal stationarity within each channel as well as spatial independence across channels. Then, using a convolutional recurrent neural network, STF-GR extracts and merges spatial–temporal features of the decomposed sEMG signals. Finally, a negative log-likelihood-based cost function is used to make the final gesture decision. To evaluate the performance of STF-GR, we conduct experiments on three data sets: noninvasive adaptive hand prosthetic (NinaPro), CapgMyo, and BandMyo. The first two are publicly available, and BandMyo was collected by ourselves. Experimental evaluations with within-subject tests show that STF-GR outperforms other state-of-the-art methods, including deep learning algorithms that do not focus on spatial–temporal features and traditional machine learning algorithms that use handcrafted features.

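As a rough illustration of the pipeline the abstract describes, the sketch below pairs an externally supplied multivariate EMD step with a small convolutional recurrent network trained under a negative log-likelihood cost. The channel count, window length, number of intrinsic mode functions, and all layer sizes are illustrative assumptions, not the authors' actual configuration, and the decomposition itself is treated as given input rather than implemented here.

# Hypothetical sketch of an STF-GR-style pipeline (not the authors' code).
# Assumptions: 8 sEMG channels, windows of 200 samples, and 4 stationary
# subsignals (IMFs) per channel produced by a multivariate EMD stage upstream.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CRNNGestureClassifier(nn.Module):
    """Convolutions extract spatial features across channels/IMFs; a GRU
    models their temporal evolution; log-probabilities feed an NLL loss."""

    def __init__(self, n_channels=8, n_imfs=4, n_classes=10, hidden=128):
        super().__init__()
        in_feats = n_channels * n_imfs          # stacked IMFs form the input depth
        self.conv = nn.Sequential(
            nn.Conv1d(in_feats, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, n_channels * n_imfs, time) -- IMF-decomposed sEMG window
        h = self.conv(x)                         # spatial features per time step
        h = h.transpose(1, 2)                    # (batch, time, features) for the GRU
        _, h_n = self.gru(h)                     # temporal summary of the window
        return F.log_softmax(self.fc(h_n[-1]), dim=-1)


# One training step with a negative log-likelihood cost, as in the abstract.
model = CRNNGestureClassifier()
criterion = nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 8 * 4, 200)                  # placeholder decomposed sEMG batch
y = torch.randint(0, 10, (16,))                  # placeholder gesture labels
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()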