Abstract

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion or object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion and object motion speeds. These results can be used to inform and validate computational models of flow parsing.

Highlights

  • In Experiments 2 and 3, we further used our method to examine the tuning of flow parsing to self-motion and object motion speed, in order to characterize how the accuracy of flow parsing depends on these speeds. The findings of these experiments help develop a more detailed understanding of how flow parsing makes use of various types of local motion information and how the process is tuned to self-motion and object motion speed. They also inform computational and neural models of flow parsing (e.g., Layton & Fajen, 2016) and provide the data required for validating these models.

  • It was significantly lower in the hemifield condition (31.3 ± 3.7%) than in the no local frontal view condition (p = .0002). This suggests that local motion information in the retinal vicinity of the probe object plays a significant role in flow parsing.

  • The flow parsing gains were significantly below 100%. This indicates that rich visual information about self-motion and the layout of the scene is not sufficient to enable the precise perception of scene-relative object motion during forward self-motion.

Introduction

It was proposed long ago that the coherent, large-scale pattern of optic flow normally generated by movements of the observer specifies how one has just moved (Gibson, 1958), and that deviations from this global flow pattern signal independent object motion (Gibson, 1954). To explain the underlying perceptual process for object motion perception during self-motion, Rushton and Warren (2005) proposed the flow parsing hypothesis: the visual system uses retinal flow to determine what component of retinal motion is due to self-motion, globally parses out this component, and leaves the observer with a percept of scene-relative object motion.
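The subtraction at the heart of the flow parsing hypothesis can be sketched numerically. The snippet below is an illustrative toy model, not the article's implementation: the function name, the 2-D velocity representation, and the example values are all assumptions. It shows how a gain-scaled copy of the self-motion-induced flow is subtracted from an object's retinal motion, and why a gain below 1 (as the experiments report) leaves residual self-motion flow in the percept.

```python
def flow_parse(retinal_motion, self_motion_flow, gain=1.0):
    """Toy model of flow parsing as global subtraction.

    retinal_motion: (x, y) retinal velocity of the object, deg/s.
    self_motion_flow: (x, y) flow component at the object's retinal
        location that is due to self-motion, deg/s.
    gain: flow parsing gain; 1.0 means complete subtraction.

    Returns the perceived scene-relative object velocity, deg/s.
    """
    return tuple(r - gain * s
                 for r, s in zip(retinal_motion, self_motion_flow))


# Example (hypothetical values): the object moves 2 deg/s rightward in
# the scene, and forward self-motion adds 3 deg/s of rightward flow at
# the object's retinal location, so the retinal motion is 5 deg/s.
retinal = (5.0, 0.0)
self_flow = (3.0, 0.0)

complete = flow_parse(retinal, self_flow, gain=1.0)
# -> (2.0, 0.0): scene-relative motion is recovered exactly.

partial = flow_parse(retinal, self_flow, gain=0.7)
# With a gain below unity, ~0.9 deg/s of self-motion flow remains in
# the percept, so the object appears to move faster than it does.
```

A measured flow parsing gain can then be read as the fraction of `self_flow` that the visual system actually subtracted; the article's finding of gains below 100% corresponds to the `partial` case above.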
