Abstract

Everyday actions like moving the head, walking around, and grasping objects are typically self-controlled. This presents a problem when studying the signals encoding such actions, because active self-movement is difficult to control experimentally. Available techniques demand repeatable trials, but each action is unique, making it difficult to measure fundamental properties such as psychophysical thresholds. We present a novel paradigm that recovers both the precision and the bias of self-movement signals with minimal constraint on the participant. The paradigm relies on linking image motion to previous self-movement, and on two experimental phases to extract the signal encoding the latter. It also accounts for a hidden source of external noise not previously considered in techniques that link display motion to self-movement in real time (e.g., virtual reality). We use head rotations as an example of self-movement and show that the precision of the signals encoding head movement depends on whether they are used to judge visual motion or auditory motion. We find that perceived motion is slowed during head movement in both cases. The "nonimage" signals encoding active head rotation (motor commands, proprioception, and vestibular cues) are therefore biased toward lower speeds and/or displacements. In a second experiment, we trained participants to rotate their heads at different rates and found that the imprecision of the head rotation signal rises proportionally with head speed (Weber's law). We discuss the findings in terms of the different motion cues used by vision and hearing, and the implications they have for Bayesian models of motion perception.

NEW & NOTEWORTHY

We present a psychophysical technique for measuring the precision of signals encoding active self-movements. Using head movements, we show that 1) precision is greater when active head rotation is judged against visual comparison stimuli than against auditory ones; 2) precision decreases with head speed (Weber's law); and 3) perceived speed is lower during head rotation. The findings may reflect the steps needed to convert different cues into common units, and they challenge standard Bayesian models of motion perception.
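As an illustrative sketch of the Weber's law result summarized above (the notation here is ours, not the paper's): if imprecision rises proportionally with head speed, the standard deviation of the head rotation signal can be written as

\sigma(v) = k \, v

where $v$ is head speed, $\sigma(v)$ is the imprecision (e.g., the discrimination threshold) of the head rotation signal at that speed, and $k$ is a constant Weber fraction. A constant $k$ implies that the just-noticeable difference in head speed scales linearly with the speed being judged.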
