Abstract

Because they are predicated on supervised machine learning, pattern recognition approaches to myoelectric prosthesis control require electromyography (EMG) training data collected concurrently with every detectable motion. Within this framework, calibration protocols for simultaneous control of multifunctional prosthetic hands rapidly become prohibitively long: the number of unique motions grows geometrically with the number of controllable degrees of freedom (DoFs). This paper proposes a technique intended to circumvent this combinatorial explosion. Using EMG windows from 1-DoF motions as input and EMG windows from 2-DoF motions as targets, we train generative deep learning models to synthesize EMG windows corresponding to multi-DoF motions. Once trained, such models can be used to complete datasets consisting of only 1-DoF motions, enabling simple calibration protocols whose durations scale linearly with the number of DoFs. We evaluated the synthetic EMG produced in this way via a classification task, using a database of forearm surface EMG collected during 1-DoF and 2-DoF motions. Multi-output classifiers were trained on either (I) real data from 1-DoF and 2-DoF motions, (II) real data from only 1-DoF motions, or (III) real data from 1-DoF motions appended with synthetic EMG from 2-DoF motions. When tested on data containing all possible motions, classifiers trained on synthetic-appended data (III) significantly outperformed classifiers trained on 1-DoF real data alone (II), but significantly underperformed classifiers trained on both 1-DoF and 2-DoF real data (I) (p < 0.05). These findings suggest that it is feasible to model EMG concurrent with multiarticulate motions as nonlinear combinations of EMG from constituent 1-DoF motions, and that such modelling can be harnessed to synthesize realistic training data.
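To make the synthesis idea concrete, the sketch below (PyTorch) illustrates one possible instantiation of the training setup described above: a network takes a pair of EMG windows from two constituent 1-DoF motions and is regressed onto a real EMG window from the corresponding 2-DoF motion. The abstract does not specify the generative architecture, loss, channel count, or window length, so the MLP, mean-squared-error objective, and tensor shapes here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: map EMG windows from two 1-DoF motions to a
# synthetic EMG window for the combined 2-DoF motion. Architecture,
# shapes, and loss are assumptions; only the input/target pairing
# follows the abstract.
import torch
import torch.nn as nn

N_CHANNELS, WINDOW = 8, 200          # assumed: 8 surface EMG channels, 200-sample windows
IN_DIM = 2 * N_CHANNELS * WINDOW     # two flattened 1-DoF windows as input
OUT_DIM = N_CHANNELS * WINDOW        # one flattened 2-DoF window as target

class Emg2DofSynthesizer(nn.Module):
    """Synthesizes 2-DoF EMG from EMG of the two constituent 1-DoF motions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IN_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, OUT_DIM),
        )

    def forward(self, x_dof_a, x_dof_b):
        x = torch.cat([x_dof_a.flatten(1), x_dof_b.flatten(1)], dim=1)
        return self.net(x).view(-1, N_CHANNELS, WINDOW)

def train_step(model, opt, x_a, x_b, y_2dof):
    """One supervised step: the synthetic window is regressed onto a real 2-DoF window."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x_a, x_b), y_2dof)
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    model = Emg2DofSynthesizer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Random tensors stand in for real calibration EMG windows.
    x_a = torch.randn(32, N_CHANNELS, WINDOW)   # 1-DoF motion A (e.g. wrist flexion)
    x_b = torch.randn(32, N_CHANNELS, WINDOW)   # 1-DoF motion B (e.g. forearm pronation)
    y   = torch.randn(32, N_CHANNELS, WINDOW)   # real combined 2-DoF motion
    print(train_step(model, opt, x_a, x_b, y))
```

Once such a model is trained, it could be applied to every pair of 1-DoF motions in a 1-DoF-only calibration set to produce the synthetic 2-DoF windows used in training condition (III).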
