Abstract

Surface electromyogram (sEMG)-based hand gesture recognition is widely used in human–computer interface systems. However, recognition models generalize poorly in cross-subject and cross-day settings. Transfer learning, which adapts a model pretrained on one task to another, has proven effective for this kind of problem. To this end, this article first proposes a multiscale kernel convolutional neural network (MKCNN) model to extract and fuse multiscale features of multichannel sEMG signals. Building on the MKCNN, a transfer learning model named TL-MKCNN combines the MKCNN with its Siamese network through a custom distribution normalization module (DNM) and a distribution alignment module (DAM) to achieve domain adaptation. The DNM clusters the deep features extracted from different domains around their category center points in the feature space, and the DAM further aligns the overall distributions of the deep features across domains. Both the MKCNN and TL-MKCNN models are evaluated on various benchmark databases to verify the effectiveness of the transfer learning framework. The experimental results show that, on the benchmark database NinaPro DB6, TL-MKCNN achieves average accuracies of 97.22% within-session, 74.48% cross-subject, and 90.30% cross-day, which are 4.31%, 11.58%, and 5.51% higher, respectively, than those of the MKCNN model alone. Compared with state-of-the-art works, TL-MKCNN obtains accuracy improvements of 13.38% on cross-subject and 37.88% on cross-day evaluation.
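
To make the multiscale-kernel idea concrete, the following is a minimal PyTorch sketch of a network that applies parallel 1-D convolutions with different kernel sizes to a multichannel sEMG window and fuses the resulting features, returning both class logits and the deep feature vector that the DNM/DAM losses would operate on. The channel count, kernel sizes, layer widths, and class count here are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch (not the authors' code): a multiscale-kernel 1-D CNN for
# multichannel sEMG windows. Channel counts, kernel sizes, and layer names are
# assumptions made for this example; the paper's MKCNN may differ in detail.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, fused by concatenation."""
    def __init__(self, in_ch, branch_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, branch_ch, k, padding=k // 2),
                nn.BatchNorm1d(branch_ch),
                nn.ReLU(),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):                        # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

class MKCNNSketch(nn.Module):
    def __init__(self, emg_channels=14, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            MultiScaleBlock(emg_channels, 32),   # 3 branches -> 96 feature maps
            nn.MaxPool1d(2),
            MultiScaleBlock(96, 64),             # 3 branches -> 192 feature maps
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(192, num_classes)

    def forward(self, x):
        z = self.features(x).squeeze(-1)         # deep feature vector
        return self.classifier(z), z             # logits and features (for DNM/DAM-style losses)

# Example: a batch of 4 windows, 14 sEMG channels, 400 samples each
logits, feats = MKCNNSketch()(torch.randn(4, 14, 400))
```

In a transfer-learning setup of the kind the abstract describes, two weight-sharing copies of such a network (a Siamese pair) would process source-domain and target-domain batches, and additional losses would pull same-class features toward shared class centers and align the overall source and target feature distributions.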
