Motion perception is a critical function of the visual system. In a three-dimensional environment, multiple sensory cues carry information about an object's motion trajectory. Previous work has quantified the contribution of binocular motion cues, such as interocular velocity differences and changing disparities over time, as well as monocular motion cues, such as size and density changes. However, even when these cues are presented in concert, observers systematically misreport the direction of motion-in-depth. Although head position is held fixed with a chin or head rest in most laboratory experiments, under real-world viewing conditions an observer's head is subject to small involuntary movements. Here, we considered the potential impact of such “head jitter” on motion-in-depth perception. We presented visual stimuli in a head-mounted virtual reality device that provided low-latency head tracking and asked observers to judge 3D object motion. We found that performance improved when we updated the visual display consistently with the small changes in head position. When we disrupted or delayed head movement–contingent updating of the visual display, the proportion of motion-in-depth misreports again increased, reflected in both a reduction in sensitivity and an increase in bias. Our findings identify a critical function of head jitter in visual motion perception, which has been obscured in most (head-fixed and non-head-jitter-contingent) laboratory experiments.
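To make the key manipulation concrete, the following is a minimal sketch, not the authors' code, of what head movement–contingent display updating and its "delayed" and "disrupted" variants could look like in a per-frame rendering loop. All names and parameter values (sample_head_position, render_frame, DELAY_FRAMES, JITTER_SD_M) are hypothetical illustrations, not details taken from the study.

```python
# Hypothetical sketch of head movement-contingent display updating.
# The virtual camera either follows the tracked head position ("contingent"),
# follows it with a lag ("delayed"), or ignores head jitter ("disrupted").

import random
from collections import deque

DELAY_FRAMES = 6      # hypothetical lag, in frames, for the "delayed" condition
JITTER_SD_M = 0.002   # hypothetical SD of involuntary head jitter, in meters


def sample_head_position(t):
    """Simulate small involuntary head movements ('head jitter') around a fixed point."""
    return tuple(random.gauss(0.0, JITTER_SD_M) for _ in range(3))


def render_frame(camera_position, t):
    """Stand-in for rendering the stimulus from the given viewpoint."""
    print(f"frame {t}: camera at {camera_position}")


def run_trial(n_frames=10, condition="contingent"):
    """Update the virtual camera on each frame according to the experimental condition."""
    history = deque(maxlen=DELAY_FRAMES + 1)
    for t in range(n_frames):
        head = sample_head_position(t)
        history.append(head)
        if condition == "contingent":
            camera = head            # display follows the tracked head
        elif condition == "delayed":
            camera = history[0]      # display follows the head, but several frames late
        else:                        # "disrupted": display ignores head jitter
            camera = (0.0, 0.0, 0.0)
        render_frame(camera, t)


if __name__ == "__main__":
    run_trial(condition="delayed")
```

Under these assumptions, the contrast between conditions is simply which head-pose sample drives the camera on a given frame; the stimulus, tracking, and rendering pipeline are otherwise identical.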