Abstract

Numerous movement-intent decoders have been reported in the literature, differing in the algorithms used and in the nature of the outputs they generate, and each approach has its own advantages and disadvantages. Combining the estimates of multiple algorithms may yield better performance than any individual method. This paper presents and evaluates a shared-controller framework for prosthetic limbs that combines multiple decoders of volitional movement intent, and develops an algorithm for fusing their estimates into a single prosthesis command. The approach is validated with a system that combines a Kalman filter-based decoder with a multilayer perceptron (MLP) classifier-based decoder, in online experiments in which amputee and intact-arm subjects controlled a virtual limb in real time. During testing, subjects moved the digits of a virtual hand to instructed positions using the Kalman filter decoder alone, the MLP decoder alone, or a linear combination of the two. The shared controller produced statistically significant improvements over the component decoders: certain degrees of shared control increased the time-in-target metric and decreased unintended movements. The shared controller thus combines the strengths of its components, inheriting the flexibility of the Kalman filter decoder and the limited unwanted movements of the classifier-based decoder, resulting in a system that may perform the tasks of everyday life more naturally and reliably.
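The abstract describes blending the two decoders' outputs via a linear combination. The sketch below illustrates that idea only in minimal form; the function name, the mixing parameter `alpha`, and the example values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def shared_control(kf_est, mlp_est, alpha):
    """Linearly blend two decoder output vectors.

    alpha = 1.0 -> pure Kalman filter estimate;
    alpha = 0.0 -> pure MLP classifier estimate;
    intermediate values give a degree of shared control.
    """
    kf_est = np.asarray(kf_est, dtype=float)
    mlp_est = np.asarray(mlp_est, dtype=float)
    return alpha * kf_est + (1.0 - alpha) * mlp_est

# Hypothetical example: blended digit-position command for two digits
blended = shared_control([0.8, 0.1], [1.0, 0.0], alpha=0.5)
```

In practice the paper evaluated several degrees of shared control, so `alpha` would be swept over a range rather than fixed.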

