Abstract
Standard point-light biological motion stimuli do not specify disparity information, inducing depth ambiguities for specific views of the walker (Vanrie, Dekeyser, & Verfaillie, 2004). In these cases perception becomes multi-stable, and the same stimulus can be perceived as a walker heading in two alternative directions (Vangeneugden et al., 2011). Existing neural and computational theories of biological motion perception are based either on learned 2D templates (e.g., Giese & Poggio, 2003; Lange & Lappe, 2006; Serre & Poggio, 2007) or on the online fitting of 3D body models to image features (e.g., Marr & Vaina, 1982). The question arises whether such models can account for this multi-stable perception of three-dimensional body structure from motion, and which predictions at the level of single-cell and population activity follow from them. We present an extension of a physiologically-inspired dynamical neural model for the processing of body motion (Giese & Poggio, 2003), which accounts for such multi-stable perception through dynamically competing view-specific neural representations. The model qualitatively reproduces several key results from psychophysical experiments investigating perceptual multi-stability in biological motion perception.
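The core mechanism described above, competition between view-specific neural representations, can be illustrated with a minimal sketch. The following is not the authors' model; it is a standard two-population rate model with mutual inhibition and slow adaptation, a generic mechanism for perceptual bistability. All parameter values and the function name `simulate` are illustrative assumptions.

```python
import numpy as np

def simulate(T=2000, dt=1.0, tau=10.0, tau_a=400.0,
             inhibition=2.0, adapt_strength=1.5, input_drive=1.0,
             noise=0.05, seed=0):
    """Two view-selective populations competing via mutual inhibition.

    Slow adaptation weakens the dominant population over time, so
    dominance alternates between the two perceived heading directions.
    Returns, per time step, the index (0 or 1) of the dominant view.
    """
    rng = np.random.default_rng(seed)
    r = np.array([0.6, 0.4])   # firing rates of the two view populations
    a = np.zeros(2)            # slow adaptation variables
    dominant = []
    for _ in range(int(T / dt)):
        # Each population is driven by the stimulus, inhibited by the
        # other population, and weakened by its own adaptation.
        drive = input_drive - inhibition * r[::-1] - adapt_strength * a
        drive += noise * rng.standard_normal(2)
        r += dt / tau * (-r + np.maximum(drive, 0.0))   # fast rate dynamics
        a += dt / tau_a * (-a + r)                      # slow adaptation
        dominant.append(int(r[1] > r[0]))
    return np.array(dominant)

percept = simulate()
switches = int(np.count_nonzero(np.diff(percept)))
```

With these illustrative parameters the dominant percept alternates on the slow adaptation timescale, mimicking the spontaneous reversals of the depth-ambiguous walker; the switch count `switches` records how often the perceived heading direction flips during the run.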