Abstract

In this study, we aimed to develop a novel electromyography (EMG)-based neural machine interface (NMI), called the Neural Network-Musculoskeletal hybrid Model (N2M2), to decode continuous joint angles. Our approach combines the concepts of machine learning and musculoskeletal modeling. We compared our novel design with a musculoskeletal model (MM) and 2 continuous EMG decoders based on artificial neural networks (ANNs): multilayer perceptrons (MLPs) and nonlinear autoregressive neural networks with exogenous inputs (NARX networks). EMG and joint kinematics data were collected from 10 non-disabled subjects and 1 transradial amputee. We quantified offline performance across 3 different conditions (i.e., varied arm postures, shifted electrode locations, and noise-contaminated EMG signals) as well as online performance in a virtual postural matching task. Finally, we implemented the N2M2 to operate a prosthetic hand and tested functional task performance. The N2M2 made more accurate predictions than the MLP in all postures and electrode locations (p < 0.003). For estimated metacarpophalangeal (MCP) joint angles, the N2M2 was less sensitive to noisy EMG signals than the MM or NARX network with respect to error (p < 0.032), as well as the NARX network with respect to correlation (p = 0.007). Additionally, the N2M2 had better online task performance than the NARX network (p ≤ 0.030). Overall, we found that combining the concepts of machine learning and musculoskeletal modeling resulted in a more robust joint kinematics decoder than either concept alone. The outcome of this study may result in a novel, highly reliable controller for powered prosthetic hands.
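To make the hybrid idea concrete, the sketch below illustrates one possible structure for a decoder of this kind, under assumptions not stated in the abstract: a small neural network stage maps EMG features to antagonistic muscle activations, and a simplified quasi-static musculoskeletal stage converts those activations into a continuous joint angle. The class name, network size, and musculoskeletal parameters are all hypothetical and are not the authors' implementation.

```python
# Hypothetical sketch of a hybrid EMG decoder (not the N2M2 implementation):
# a neural-network stage estimates muscle activations from EMG features, and a
# simplified two-muscle (flexor/extensor) musculoskeletal stage maps those
# activations to a continuous joint angle.
import numpy as np

class HybridEMGDecoder:
    def __init__(self, n_channels, n_hidden=16):
        rng = np.random.default_rng(0)
        # Single hidden-layer perceptron: EMG features -> 2 muscle activations
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_channels))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(2, n_hidden))
        self.b2 = np.zeros(2)
        # Illustrative musculoskeletal parameters (assumed values):
        # moment arms for flexor (+) and extensor (-), passive stiffness, rest angle
        self.moment_arms = np.array([1.0, -1.0])
        self.stiffness = 5.0
        self.rest_angle = 0.0

    def predict_angle(self, emg_features):
        # Neural-network stage: estimate normalized muscle activations in [0, 1]
        h = np.tanh(self.W1 @ emg_features + self.b1)
        activations = 1.0 / (1.0 + np.exp(-(self.W2 @ h + self.b2)))
        # Musculoskeletal stage: net joint torque from antagonistic activations,
        # balanced against a passive elastic term to give a quasi-static angle
        torque = self.moment_arms @ activations
        return self.rest_angle + torque / self.stiffness

# Illustrative use with 8 EMG channels; the returned angle is in radians.
decoder = HybridEMGDecoder(n_channels=8)
angle = decoder.predict_angle(np.random.rand(8))
```

In a design like this, the network weights would be trained on recorded EMG and joint kinematics, while the musculoskeletal stage constrains the output to physiologically plausible behavior; that constraint is one plausible reason a hybrid decoder could be more robust to noisy EMG than a purely data-driven network.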
