Abstract

Brief windows of vision presented during reaching movements contribute to estimates of endpoint error. It remains unclear whether such error detection processes depend on other sources of information (e.g., proprioception and efference). In the current study, participants were presented with a brief window of vision and then judged whether their movement endpoint undershot or overshot the target after: 1) performing an active reach; 2) being passively guided by a robotic arm; and 3) observing a fake hand moved by the robotic arm. Participants were most accurate at estimating their endpoint error in the active movement condition and least accurate in the action observation condition. Thus, both efferent and proprioceptive information contribute significantly to endpoint error detection, even when visual feedback is brief.
