Abstract

Flow parsing is a way to estimate the direction of scene-relative motion of independently moving objects during self-motion of the observer. So far, this has been tested for simple geometric shapes such as dots or bars. Whether further cues such as prior knowledge about typical directions of an object’s movement, e.g., typical human motion, are considered in the estimations is currently unclear. Here, we adjudicated between the theory that the direction of scene-relative motion of humans is estimated exclusively by flow parsing, just like for simple geometric objects, and the theory that prior knowledge about biological motion affects estimation of perceived direction of scene-relative motion of humans. We placed a human point-light walker in optic flow fields that simulated forward motion of the observer. We introduced conflicts between biological features of the walker (i.e., facing and articulation) and the direction of scene-relative motion. We investigated whether perceived direction of scene-relative motion was biased towards biological features and compared the results to perceived direction of scene-relative motion of scrambled walkers and dot clouds. We found that for humans the perceived direction of scene-relative motion was biased towards biological features. Additionally, we found larger flow parsing gain for humans compared to the other walker types. This indicates that flow parsing is not the only visual mechanism relevant for estimating the direction of scene-relative motion of independently moving objects during self-motion: observers also rely on prior knowledge about typical object motion, such as typical facing and articulation of humans.

Highlights

  • Extracting independent object motion in a scene during self-motion is a challenge for the visual system: any motion on the retina might be due to either the self-motion of the observer, the motion of objects in the scene, or some combination of both sources of motion (Wallach, 1987)

  • The theory of flow parsing (Rushton & Warren, 2005) proposes that the visual system extracts scene-relative object motion by using optic flow analysis to “subtract” the retinal motion component that is due to self-motion from the full retinal motion field

  • Form and position cues in the flow field did not contribute to flow parsing. These findings indicated that visual cues other than optic flow were irrelevant for estimating motion profiles of independently moving objects during self-motion


Introduction

Extracting independent object motion in a scene during self-motion is a challenge for the visual system: any motion on the retina might be due to either the self-motion of the observer, the motion of objects in the scene, or some combination of both sources of motion (Wallach, 1987). Previous studies either presented optic flow fields that simulated motion of an observer (Foulkes et al., 2013; Niehorster & Li, 2017; Rogers et al., 2017; Rushton & Warren, 2005; Rushton et al., 2018; Vaina et al., 2014; Warren & Rushton, 2007, 2008, 2009a, 2009b) or had the observer physically move (Dokka et al., 2015a, 2015b; Dupin & Wexler, 2013; Fajen et al., 2013b; Fajen & Matthis, 2013a). Observers performed tasks that involved detecting the motion of a target object (e.g., Rushton & Warren, 2005), judging its direction of scene-relative motion (e.g., Warren & Rushton, 2007), or estimating whether a collision with the object was imminent (Fajen & Matthis, 2013a).
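The flow-parsing account described above can be illustrated with a minimal sketch. This is not the authors' model; it is a toy vector-subtraction example in which the retinal flow at each point is assumed to be the sum of a self-motion (optic flow) component and any independent object motion, so that subtracting an estimate of the self-motion component recovers the scene-relative object motion. The function name `parse_flow` is a hypothetical label chosen for illustration.

```python
import numpy as np

def parse_flow(retinal_flow, estimated_self_flow):
    """Hypothetical helper: recover scene-relative object motion by
    subtracting the estimated self-motion flow component from the
    full retinal motion field (flow parsing as vector subtraction)."""
    return retinal_flow - estimated_self_flow

# Toy example: one retinal point whose total motion (3, 1) combines
# a self-motion flow component (2, 0) with object motion (1, 1).
retinal = np.array([[3.0, 1.0]])
self_flow = np.array([[2.0, 0.0]])
object_motion = parse_flow(retinal, self_flow)
print(object_motion)  # [[1. 1.]]
```

In the actual visual system, the self-motion component is not given but must itself be estimated from optic flow analysis; the present study asks whether this subtraction is the only mechanism at work, or whether priors about typical object motion (here, human facing and articulation) also shape the estimate.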
