Abstract

Background
Advanced prostheses can restore function and improve quality of life for individuals with amputations. Unfortunately, most commercial control strategies do not fully utilize the rich control information from residual nerves and musculature. Continuous decoders can provide more intuitive prosthesis control using multi-channel neural or electromyographic recordings. Three components influence continuous decoder performance: the data used to train the algorithm, the algorithm itself, and smoothing filters on the algorithm's output. Individual groups often focus on a single decoder, so very few studies compare different decoders under otherwise similar experimental conditions.

Methods
We completed a two-phase, head-to-head comparison of 12 continuous decoders using activities of daily living. In phase one, we compared two training types and a smoothing filter with three algorithms (modified Kalman filter (mKF), multi-layer perceptron, and convolutional neural network) in a clothespin relocation task. We compared training types that included only individual digit and wrist movements vs. combination movements (e.g., simultaneous grasp and wrist flexion). We also compared raw vs. nonlinearly smoothed algorithm outputs. In phase two, we compared the three algorithms in fragile egg, zipping, pouring, and folding tasks using the combination training and smoothing found beneficial in phase one. In both phases, we collected objective, performance-based measures (e.g., success rate) and subjective, user-focused measures (e.g., preference).

Results
Phase one showed that combination training improved prosthesis control accuracy and speed, and that nonlinear smoothing improved accuracy but generally reduced speed. Importantly, phase one showed that simultaneous movements were used in the task, and that the mKF and multi-layer perceptron predicted more simultaneous movements than the convolutional neural network. In phase two, user-focused metrics favored the convolutional neural network and mKF, whereas performance-based metrics were generally similar among all algorithms.

Conclusions
These results confirm that state-of-the-art algorithms, whether linear or nonlinear in nature, functionally benefit from training on more complex data and from output smoothing. These studies will be used to select a decoder for a long-term take-home trial with implanted neuromyoelectric devices. Overall, clinical considerations may favor the mKF, as it is similar in performance, faster to train, and computationally less expensive than neural networks.
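To make the continuous-decoding setup above concrete, the sketch below shows a minimal Kalman-filter-style decoder that maps a vector of EMG features to kinematic positions for each degree of freedom. This is an illustrative example only, not the published modified Kalman filter [36]; the parameter names (A, W, H, Q) and the least-squares fitting routine are generic textbook choices assumed for the sketch.

```python
import numpy as np

# Minimal sketch of a Kalman-filter-style continuous decoder (illustrative only,
# not the published modified Kalman filter [36]). The state x_t holds kinematic
# positions for each degree of freedom; the observation z_t is a vector of EMG
# features (e.g., mean absolute value per channel).

def fit_kalman_params(X, Z):
    """Least-squares fit of decoder parameters from training data.
    X: (T, n_states) intended kinematics; Z: (T, n_feats) EMG features."""
    X0, X1 = X[:-1], X[1:]
    A = np.linalg.lstsq(X0, X1, rcond=None)[0].T   # x_{t+1} ~ A x_t
    W = np.cov((X1 - X0 @ A.T).T)                  # state (trajectory) noise
    H = np.linalg.lstsq(X, Z, rcond=None)[0].T     # z_t ~ H x_t
    Q = np.cov((Z - X @ H.T).T)                    # observation (EMG) noise
    return A, W, H, Q

class KalmanDecoder:
    def __init__(self, A, W, H, Q):
        self.A, self.W, self.H, self.Q = A, W, H, Q
        n_states = A.shape[0]
        self.x = np.zeros(n_states)                # current kinematic estimate
        self.P = np.eye(n_states)                  # estimate covariance

    def step(self, z):
        """Run one predict/update cycle on the latest EMG feature vector z."""
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.W
        S = self.H @ P_pred @ self.H.T + self.Q
        K = P_pred @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = x_pred + K @ (z - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x                              # kinematics for each DOF
```

In such a scheme, fit_kalman_params would be run once on paired training kinematics and EMG features, and step would then be called on each new feature vector during real-time control.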

Highlights

  • Advanced prostheses can restore function and improve quality of life for individuals with amputations

  • We compare training an algorithm with individual finger or wrist movements vs. combination movements, and compare the effect of using a nonlinear smoothing filter [49] (an illustrative smoothing sketch follows these highlights). We study these conditions in a two-degree-of-freedom decode using advanced, previously published, continuous control decoders: a modified Kalman filter (mKF) [36], a multi-layer perceptron (MLP) [29], and a convolutional neural network (CNN) [30]

  • Training with combinations significantly reduced the median time needed to transfer a clothespin for CNN and MLP from 8.3 to 6.7 s (p < 0.05) and 11.8 to 7.9 s (p < 0.001), respectively (Fig. 4e)
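As a rough illustration of the kind of output smoothing compared in these highlights, the sketch below applies a simple nonlinear exponential smoother to the decoder's per-degree-of-freedom outputs: small changes are heavily smoothed to suppress jitter, while large changes pass through quickly. This is a hypothetical example that assumes outputs normalized to [-1, 1]; it is not the specific nonlinear filter evaluated in the study [49].

```python
import numpy as np

def nonlinear_smooth(raw, prev, min_alpha=0.05, max_alpha=0.9):
    """Illustrative nonlinear output smoother (not the filter from [49]).
    raw:  latest decoder output per degree of freedom, assumed in [-1, 1]
    prev: previous smoothed output
    The blend weight grows with the size of the change, so small fluctuations
    are heavily smoothed while large, intentional commands pass through."""
    delta = np.abs(raw - prev)                    # per-DOF change this cycle
    alpha = np.clip(delta, min_alpha, max_alpha)  # bigger change -> less smoothing
    return alpha * raw + (1.0 - alpha) * prev
```

A real-time loop would call nonlinear_smooth once per decode cycle, feeding back the previous smoothed value as prev. The design trades some responsiveness (speed) for steadier output (accuracy), consistent with the trade-off reported in phase one.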


Introduction

Advanced prostheses can restore function and improve quality of life for individuals with amputations. Using more neuromuscular inputs can provide more intuitive prosthesis control through richer, more diverse data that can be classified into different movements based on residual neuromuscular activation patterns. Several groups have demonstrated intuitive prosthesis control with decoders that classify distinct movements [15,16,17,18,19,20,21,22]. These decoders classify EMG patterns into several pre-determined grip patterns (e.g., close hand, open hand, pinch). Classifiers are limited in that they allow the user only a predetermined, fixed set of movement types.
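As a concrete (and deliberately simplified) illustration of this classification approach, the sketch below reduces a window of multi-channel EMG to mean-absolute-value features and assigns it to the nearest centroid of a fixed set of grips. The feature choice, centroid classifier, and grip names are assumptions for illustration only, not the methods of the cited groups.

```python
import numpy as np

# Simplified pattern-classification sketch (illustrative; not any cited group's
# implementation). Windowed EMG is reduced to mean-absolute-value (MAV) features
# and assigned to the nearest centroid among a fixed set of pre-trained grips.

GRIPS = ["hand_open", "hand_close", "pinch"]       # fixed, predetermined classes

def mav_features(window):
    """window: (n_samples, n_channels) raw EMG -> (n_channels,) MAV features."""
    return np.mean(np.abs(window), axis=0)

def train_centroids(windows, labels):
    """Average the MAV features of each grip's labeled training windows."""
    feats = np.array([mav_features(w) for w in windows])
    labels = np.array(labels)
    return {g: feats[labels == g].mean(axis=0) for g in GRIPS}

def classify(window, centroids):
    """Return the grip whose centroid is closest to this window's features."""
    f = mav_features(window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))
```

Because the output is always one of the predetermined grips, such a classifier cannot produce graded or simultaneous movements, which is the limitation continuous decoders aim to address.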

