Abstract

When we move, the visual direction of objects in the environment can change substantially. Compared with our understanding of depth perception, the problem the visual system faces in computing this change is relatively poorly understood. Here, we tested the extent to which participants’ judgments of visual direction could be predicted by standard cue combination rules. Participants were tested in virtual reality using a head-mounted display. In a simulated room, they judged the position of an object at one location, before walking to another location in the room and judging, in a second interval, whether an object was at the expected visual direction of the first. By manipulating the scale of the room across intervals, a manipulation that observers did not notice, we put two classes of cue into conflict: one that depends only on visual information and one that uses proprioceptive information to scale any reconstruction of the scene. We find that the sensitivity to changes in one class of cue while keeping the other constant provides a good prediction of performance when both cues vary, consistent with the standard cue combination framework. Nevertheless, by comparing judgments of visual direction with those of distance, we show that judgments of visual direction and distance are mutually inconsistent. We discuss why there is no need for any contradiction between these two conclusions.

Highlights

  • Three-dimensional representation in a moving observer: the coordinates of three-dimensional (3D) vision can seem misleadingly simple

  • Most research in the field of 3D vision focuses on the cues that contribute to the estimation of distance and depth, presumably because, for a static observer (as is typical of most psychophysical experiments), the estimation of visual direction seems simpler and less relevant to the representation of the 3D world around us

  • We have previously demonstrated how these cues can be combined according to widely accepted rules of cue combination (Glennerster et al., 2006; Rauschecker et al., 2006; Svarverud et al., 2010; Svarverud et al., 2012), although always in relation to the perception of depth. Here we extend this analysis to the perception of visual direction and, to anticipate our results, we show again that performance in combined-cue conditions for visual direction follows the pattern expected by standard cue combination rules, just as it did for perceived depth (these rules are sketched below)
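
For reference, the standard cue combination rule referred to here is not reproduced in this excerpt, but its widely used maximum-likelihood form for two cues with independent Gaussian noise predicts the combined-cue estimate and its variance from the single-cue variances:

$$\hat{S}_{AB} = w_A\,\hat{S}_A + w_B\,\hat{S}_B, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_A^2 + 1/\sigma_B^2}, \qquad \sigma_{AB}^2 = \frac{\sigma_A^2\,\sigma_B^2}{\sigma_A^2 + \sigma_B^2}$$

where the subscripts A and B stand for the two cue classes (here, purely visual versus proprioceptively scaled information) and the sigmas are single-cue standard deviations. Sensitivity measured with one cue class varied at a time therefore fixes the predicted sensitivity when both vary together.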


Introduction

Three-dimensional representation in a moving observer

The coordinates of three-dimensional (3D) vision can seem misleadingly simple. There is good evidence that people are able to update their estimate of the visual direction of previously viewed objects when they move to a new location (Foo et al., 2005; Klatzky et al., 2003; Klier et al., 2008; Loomis et al., 1998; Medendorp, 2011; Rieser & Rider, 1991; Siegle et al., 2009; Thompson et al., 2004). Doing this accurately requires two things: first, an estimate of the translation of the observer, which may come from a range of cues in addition to vision, including audition, proprioception, and somatosensory information, all of which must be integrated together (Mayne, 1974; Siegle et al., 2009); second, an ability to use this information appropriately to update the observer’s representation of the scene and of the observer’s own location in it (whatever form that representation might take). Loomis et al. describe this as updating a “spatial image” (Giudice et al., 2011; Loomis et al., 2007).
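
To make concrete what such updating involves geometrically, the sketch below (illustrative only; the coordinate frame, variable names, and values are our assumptions, not the authors’ code) computes the predicted visual direction of a previously viewed object before and after a translation of the observer:

import numpy as np

# Object and observer locations in a world coordinate frame (metres).
# x is rightward, z is forward; all values are purely illustrative.
object_pos = np.array([1.0, 3.0])   # (x, z) of the previously viewed object
start_pos = np.array([0.0, 0.0])    # first viewing location
end_pos = np.array([1.5, 0.0])      # location after walking

def visual_direction(observer, target):
    """Azimuth of the target relative to the observer, in degrees
    (0 = straight ahead along z, positive = rightward)."""
    dx, dz = target - observer
    return np.degrees(np.arctan2(dx, dz))

print(visual_direction(start_pos, object_pos))  # direction in the first interval
print(visual_direction(end_pos, object_pos))    # updated direction after the walk

Any error in the estimated translation (end_pos minus start_pos) propagates directly into the predicted direction, which is why accurate updating requires both an estimate of self-motion and a scene representation to which that estimate can be applied.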
