Abstract

Neural prostheses decode intention from cortical activity to restore upper extremity movement. Typical decoding algorithms extract velocity—a vector quantity with direction and magnitude (speed)—from neuronal firing rates. Standard decoding algorithms accurately recover arm direction, but the extraction of speed has proven more difficult. We show that this difficulty is due to the way speed is encoded by individual neurons and demonstrate how standard encoding-decoding procedures produce characteristic errors. These problems are addressed using alternative brain–computer interface (BCI) algorithms that accommodate nonlinear encoding of speed and direction. Our BCI approach leads to skillful control of both direction and speed as demonstrated by stereotypic bell-shaped speed profiles, straight trajectories, and steady cursor positions before and after the movement.

Highlights

  • Neural prostheses decode intention from cortical activity to restore upper extremity movement

  • Typical decoders work by inverting encoding models fit to the recorded firing rates

  • Because the classic population vector is constructed from empirical firing rates, the speed-direction interaction is captured as a change in the length of the resultant vector, even though the "direction-only" encoding model used in this decoder has no explicit terms for speed

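The point in the last highlight can be illustrated with a small simulation. The sketch below is not the authors' code: it assumes a hypothetical population of cosine-tuned units whose directional modulation is scaled by speed (a gain-type speed-direction interaction), and a classic population-vector readout built from a direction-only model. All tuning parameters (`b0`, `m`, population size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cosine-tuned population (parameters are illustrative assumptions).
n = 200
pd = rng.uniform(0, 2 * np.pi, n)   # preferred directions, uniformly distributed
b0, m = 20.0, 10.0                  # baseline rate and modulation depth

def rates(theta, speed):
    # Gain-type speed-direction interaction: speed multiplies the
    # directional (cosine) modulation of each unit's firing rate.
    return b0 + speed * m * np.cos(theta - pd)

def population_vector(r):
    # Classic population vector: each unit votes along its preferred
    # direction, weighted by its normalized rate. Note there is no
    # explicit speed term anywhere in this decoder.
    w = (r - b0) / m
    return 2.0 / n * np.array([np.sum(w * np.cos(pd)),
                               np.sum(w * np.sin(pd))])

theta = np.pi / 3
for speed in (0.5, 1.0, 2.0):
    pv = population_vector(rates(theta, speed))
    print(f"speed {speed}: resultant length {np.linalg.norm(pv):.2f}, "
          f"angle {np.arctan2(pv[1], pv[0]):.2f} rad")
```

Although the decoder only models direction, the resultant vector's length grows with movement speed because the empirical firing rates it operates on already carry the speed-dependent gain, which is how the speed-direction interaction is captured implicitly.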
Introduction

Neural prostheses decode intention from cortical activity to restore upper extremity movement. We show how standard decoders perform by simulating firing rates with the gain-only and offset speed-encoding models. Movement trajectories were then decoded with population vectors derived from the minimal OLE decoder, which operated on these firing rates using the direction-only encoding model.

