Abstract

Vision provides a number of cues about the three-dimensional (3D) layout of objects in a scene that could be used for planning and controlling goal-directed behaviors such as pointing, grasping, and placing objects. An emerging consensus from the perceptual work is that the visual brain is a near-optimal Bayesian estimator of object properties, for example, by integrating cues in a way that accounts for differences in their reliability. We measured how the visuomotor system integrates binocular and monocular cues to 3D surface orientation to guide the placement of objects on a slanted surface. Subjects showed qualitatively similar results to those found in perceptual studies: they gave more weight to binocular cues at low slants and more weight to monocular cues like texture at high slants. We compared subjects' performance in the visuomotor task with their performance on matched perceptual tasks that required an observer to estimate the same 3D surface properties needed to control the motor behavior. The relative influence of binocular and monocular cues changed in qualitatively the same way across stimulus conditions in the two types of task; however, subjects gave significantly more weight to binocular cues for controlling hand movements than for making explicit perceptual judgments in these tasks. Thus, the brain changes how it integrates visual cues based not only on the information content of stimuli, but also on the task for which the information is used.
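The reliability-weighted cue integration the abstract refers to is usually modeled as inverse-variance weighting, in which each cue's weight is proportional to its reliability (the inverse of its noise variance). The sketch below illustrates that standard model; the function name, cue estimates, and noise values are illustrative assumptions, not data or code from this study.

```python
import math

def combine_cues(est_binocular, sigma_binocular, est_texture, sigma_texture):
    """Inverse-variance (reliability-weighted) combination of two slant cues.

    Standard Bayesian cue-integration model: weight each cue by its
    reliability r = 1/sigma^2, normalized so the weights sum to 1.
    All inputs here are hypothetical example values.
    """
    r_b = 1.0 / sigma_binocular ** 2   # binocular (stereo) reliability
    r_t = 1.0 / sigma_texture ** 2     # texture reliability
    w_b = r_b / (r_b + r_t)            # binocular weight
    w_t = r_t / (r_b + r_t)            # texture weight
    combined = w_b * est_binocular + w_t * est_texture
    # The combined estimate is more reliable than either cue alone:
    combined_sigma = math.sqrt(1.0 / (r_b + r_t))
    return combined, w_b, combined_sigma

# At low slants, stereo is typically the more reliable cue, so it dominates:
slant, w_b, sigma = combine_cues(est_binocular=10.0, sigma_binocular=2.0,
                                 est_texture=14.0, sigma_texture=6.0)
```

With these example numbers the binocular weight is 0.9, so the combined slant estimate (10.4 deg) sits close to the binocular estimate, mirroring the low-slant pattern the abstract describes; making the texture cue less noisy than stereo would shift the weight the other way.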
