Abstract

Many upper-limb prostheses lack adequate wrist rotation functionality, so users adopt compensatory strategies that can lead to overuse injury or device abandonment. In this study, we investigate the feasibility of creating and implementing a data-driven predictive control strategy for object-grasping tasks performed in virtual reality. We propose using gaze-centered vision to predict a user's wrist rotations, and we conduct a user study to assess the impact of this predictive control. We demonstrate that the vision-based predictive system reduces both compensatory shoulder movement and task completion time. We discuss the cases in which the virtual prosthesis with the predictive model did and did not yield a physical improvement across various arm movements, as well as the cognitive value of building such predictive control strategies into prosthetic controllers. We find that gaze-centered vision carries information about user intent during object reaching, and that the performance of prosthetic hands improves substantially when wrist prediction is implemented. Lastly, we address the limitations of this work, both within the study itself and for future physical implementations.

Highlights

  • When reaching for an object, humans rely on their vision to guide their reach towards the object

  • We report the algorithm's standalone performance as a model and compare it with the impact the model has when implemented in a virtual prosthesis

  • A model to predict wrist rotations from eye-tracked gaze was trained on a dataset collected from six subjects and evaluated with a user study

Summary

Introduction

When reaching for an object, humans rely on their vision to guide the reach towards the object. This visual information informs them about the trajectory of the reach and the pose of the hand before contact is made. While recent techniques in computer vision enable data-driven mappings from visual information to hand grasps, such advances have yet to be applied to prosthetic wrist control. Upper-limb prosthetic devices tend to fall short in the effectiveness of their wrist rotation, and users must perform compensatory strategies with the shoulder to make up for these shortcomings. We use data-driven methods to create an algorithm for predictive wrist kinematic control from vision.

A. Vision in grasping with prosthetic limbs
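The paper's model details are not given in this excerpt, but the core idea of a data-driven mapping from gaze-centered visual features to a wrist rotation angle can be sketched as a simple supervised regressor. The sketch below is illustrative only and is not the authors' implementation: the two-dimensional feature vector, the linear (ridge-regression) model, and the synthetic training data are all assumptions standing in for whatever features and architecture the study actually used.

```python
# Illustrative sketch, NOT the paper's implementation: a ridge-regression
# mapping from (hypothetical) gaze-centered visual features to a predicted
# wrist pronation/supination angle in degrees.
import numpy as np

def fit_wrist_predictor(features, angles, alpha=1.0):
    """Fit weights w minimizing ||X w - y||^2 + alpha ||w||^2 (closed form)."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(angles, dtype=float)
    d = X.shape[1]
    # Regularized normal equations: (X^T X + alpha I) w = X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def predict_wrist_angle(w, feature_vec):
    """Predict a single wrist rotation angle from one feature vector."""
    return float(np.asarray(feature_vec, dtype=float) @ w)

# Synthetic demo data: the angle depends linearly on two made-up
# gaze-crop features, plus a little measurement noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([30.0, -12.0])  # degrees per unit of each feature
y = X @ true_w + rng.normal(scale=0.5, size=200)

w = fit_wrist_predictor(X, y, alpha=0.1)
print(predict_wrist_angle(w, [1.0, 0.0]))  # close to 30 on this synthetic data
```

In a real controller the feature vector would come from the eye-tracked, gaze-centered image stream, and a richer nonlinear model would likely replace the linear map; the closed-form fit here just keeps the example self-contained.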


