Abstract

Prosthetic devices need to be controlled by their users, typically using physiological signals. People tend to look at objects before reaching for them and we have shown that combining eye movements with other continuous physiological signal sources enhances control. This approach suffers when subjects also look at non-targets, a problem we addressed with a probabilistic mixture over targets where subject gaze information is used to identify target candidates. However, this approach would be ineffective if a user wanted to move towards targets that have not been foveated. Here we evaluated how the accuracy of prior target information influenced decoding accuracy, as the availability of neural control signals was varied. We also considered a mixture model where we assumed that the target may be foveated or, alternatively, that the target may not be foveated. We tested the accuracy of the models at decoding natural reaching data, and also in a closed-loop robot-assisted reaching task. The mixture model worked well in the face of high target uncertainty. Furthermore, errors due to inaccurate target information were reduced by including a generic model that relied on neural signals only.
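The mixture idea described above can be sketched as follows. This is a minimal illustrative toy, not the paper's actual mKFT: it assumes simple linear attractor dynamics for each target-conditioned component, Gaussian likelihoods, and a "generic" component that trusts the neurally decoded velocity alone. All function names and parameters are hypothetical.

```python
import numpy as np

def target_model_prediction(pos, target, gain=0.1):
    # Target-conditioned component: predicted velocity points toward
    # the candidate target (a simple attractor, assumed for illustration).
    return gain * (target - pos)

def decode_step(pos, observed_vel, targets, weights, noise_var=0.01):
    """One mixture update: reweight each component by how well it explains
    the observed (neurally decoded) velocity, then mix the predictions."""
    preds = [target_model_prediction(pos, t) for t in targets]
    # Generic component with no target prior: rely on the neural signal only.
    preds.append(observed_vel)
    # Likelihood of the observation under each component (isotropic Gaussian).
    liks = np.array([np.exp(-np.sum((observed_vel - p) ** 2) / (2 * noise_var))
                     for p in preds])
    weights = weights * liks
    weights /= weights.sum()          # renormalize the posterior over components
    mixed_vel = sum(w * p for w, p in zip(weights, preds))
    return pos + mixed_vel, weights

# Demo: two gaze-identified candidate targets plus the generic component.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
weights = np.ones(3) / 3              # uniform prior over the 3 components
pos = np.array([0.0, 0.0])
pos, weights = decode_step(pos, np.array([0.08, 0.0]), targets, weights)
```

When the observed velocity is consistent with a candidate target, that component's weight grows and the decoded movement is pulled toward it; when no candidate fits, the generic component dominates and control falls back to the neural signal, which is the fallback behavior the abstract describes.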

Highlights

  • People will almost always look at an object before reaching for it [1], providing us with a rich source of information about their intended arm movements

  • We proposed an extension to the algorithm that accounts for the worst-case scenario in which the target is not foveated: a generic model with no target information is added to the mixture, giving more control to the user's neural signals when they indicate that none of the target estimates is likely to be correct

  • In our offline evaluation of natural reach decoding, we found that the mKFT generally performed well in the face of target uncertainty



Introduction

People will almost always look at an object before reaching for it [1], providing us with a rich source of information about their intended arm movements. Such a means of decoding intent may be useful for a range of user interface applications [2], including the restoration of communication or movement to people whose arms have been paralyzed. If the purpose of the interface is to restore movement with a neuroprosthesis, it is vital that saccades away from an intended reach target do not generate unintentional commands. Gaze may be an extremely useful control signal, but the user interface must be able to safely deal with the associated uncertainty.

