Abstract
Sensory feedback is critical in fine motor control, learning, and adaptation. However, robotic prosthetic limbs currently lack the feedback segment of the communication loop between user and device. Sensory substitution feedback can close this gap, but sometimes this improvement only persists when users cannot see their prosthesis, suggesting the provided feedback is redundant with vision. Thus, given the choice, users rely on vision over artificial feedback. To effectively augment vision, sensory feedback must provide information that vision cannot provide or provides poorly. Although vision is known to be less precise at estimating speed than position, no work has compared speed precision of biomimetic arm movements. In this study, we investigated the uncertainty of visual speed estimates as defined by different virtual arm movements. We found that uncertainty was greatest for visual estimates of joint speeds, compared to absolute rotational or linear endpoint speeds. Furthermore, this uncertainty increased when the joint reference frame speed varied over time, potentially caused by an overestimation of joint speed. Finally, we demonstrate a joint-based sensory substitution feedback paradigm capable of significantly reducing joint speed uncertainty when paired with vision. Ultimately, this work may lead to improved prosthesis control and capacity for motor learning.
Highlights
Both communication paths are represented by a corresponding internal model
Our results suggest vision is most uncertain about joint speed observations, and augmenting joint speed with artificial sensory feedback should yield the greatest improvement in precision
In the context of providing feedback for prosthetic limbs, our results suggest that providing joint speed feedback will yield the largest improvement to artificial proprioception when users are able to see the prosthesis
Summary
Both communication paths are represented by a corresponding internal model. Forward internal models predict future limb movements taking into account the limb’s current configuration and descending motor signals, while inverse internal models predict the motor command that results in the limb’s current movement[3]. To adapt and improve control of the limb over time, these models require knowledge of efferent motor commands (i.e., efference copy) and of the limb’s current configuration and movement (i.e., proprioception). Lack of these proprioceptive signals hampers internal model development and is detrimental to limb control, especially inter-joint coordination[4,5]. Able-bodied gaze behavior preempts limb movement with an eye saccade towards the object of interest[21], but prosthesis users’ gaze tends to track the movement of the prosthesis until it reaches the target[22]. This visual monitoring serves to replace the missing proprioception. There are several definitions of speed relevant to the movement of a limb: the rotational speed of each joint, the rotational speed of the endpoint, and the linear speed of the endpoint.
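To make the distinction between these speed definitions concrete, the sketch below computes joint speeds and linear endpoint speed for a planar two-link arm via its Jacobian. The link lengths, angles, and function name are illustrative assumptions, not values from the study.

```python
import math

def endpoint_speed(theta1, theta2, omega1, omega2, l1=0.3, l2=0.25):
    """Linear endpoint speed (m/s) of a planar two-link arm.

    theta1, theta2 -- shoulder and elbow angles (rad)
    omega1, omega2 -- joint angular speeds (rad/s); |omega1| and |omega2|
                      are the "joint speeds" themselves
    l1, l2         -- link lengths (m); illustrative values only
    """
    # Endpoint position: x = l1*cos(t1) + l2*cos(t1+t2),
    #                    y = l1*sin(t1) + l2*sin(t1+t2)
    # Differentiating gives the endpoint velocity (Jacobian times joint rates):
    vx = -l1 * math.sin(theta1) * omega1 \
         - l2 * math.sin(theta1 + theta2) * (omega1 + omega2)
    vy =  l1 * math.cos(theta1) * omega1 \
         + l2 * math.cos(theta1 + theta2) * (omega1 + omega2)
    return math.hypot(vx, vy)

# With the elbow locked (theta2 = 0, omega2 = 0) the arm rotates rigidly,
# so the linear endpoint speed reduces to (l1 + l2) * |omega1|.
```

The rigid-rotation case is a useful sanity check: the same joint speed can produce very different endpoint speeds depending on arm configuration, which is why the three definitions can diverge in visual estimation.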