P170 Neural model for the recognition of agency and social interaction from abstract stimuli
Mohammad Hovaidi Ardestani1, Martin Giese2, Nitin Saini2
1University Clinic Tübingen, Tübingen, Germany; 2Center for Integrative Neuroscience & University Clinic Tübingen, Dept. of Cognitive Neurology, Tübingen, Germany
Correspondence: Martin Giese (martin.giese@uni-tuebingen.de)
BMC Neuroscience 2018, 19(Suppl 2):P170

Introduction: Humans spontaneously derive judgements about agency and social interactions from strongly impoverished stimuli, as impressively demonstrated by the seminal work of Heider and Simmel (1944). The neural circuits that derive such judgements from image sequences are entirely unknown. It has been hypothesized that this visual function relies on high-level cognitive processes, such as probabilistic reasoning. Taking an alternative approach, we show that such functions can be accomplished by relatively elementary neural networks that can be implemented with simple, physiologically plausible neural mechanisms, exploiting an appropriately structured hierarchical (deep) neural model of the visual pathway.

Methods: Extending classical biologically inspired models for object and action perception (Riesenhuber & Poggio 1999; Giese & Poggio 2003) with a front end that exploits deep learning for the construction of low- and mid-level feature detectors, we built a hierarchical neural model that reproduces elementary psychophysical results on animacy and social perception from abstract stimuli. The lower hierarchy levels of the model consist of position-variant neural feature detectors that extract orientation and intermediately complex shape features. The next-higher level is formed by shape-selective neurons that are not completely position-invariant, which extract the 2D positions and orientations of the moving agents. A second pathway extracts the 2D motion of the moving agents. Exploiting a gain-field network, we compute the relative positions of the moving agents.
The top layers of the model combine the aforementioned features into more complex high-level features that represent the speed, the smoothness of motion, and the spatial relationships of the moving agents. The highest level of the model consists of neurons that have learned to classify the agency of the motions and different categories of social interactions.

Results: Based on input video sequences, the model successfully reproduces the results of Tremoulet and Feldman (2000) on the dependence of perceived animacy on motion parameters, as well as its dependence on the alignment of motion and body axis (Hernik et al. 2013). In addition, the model correctly classifies four categories of social interactions that have been frequently tested in the psychophysical literature (following, chasing, fighting, guarding) (e.g. Scholl and McCarthy, 2012; McAleer et al., 2011a).

Conclusion: Using simple, physiologically plausible neural circuits, the model accounts simultaneously for a variety of effects related to the perception of animacy and social interaction. This leads to interesting predictions about the neurons involved in the visual processing of such stimuli.

Acknowledgement: This work was supported by: HFSP RGP0036/2016; the European Commission HBP FP7-ICT2013-FET-F/604102 and COGIMON H2020-644727; and the DFG GZ: GI 305/4-1 and KA 1258/15-1.
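The gain-field computation mentioned in the Methods can be illustrated with a minimal numerical sketch. The code below is a hypothetical toy version, not the authors' implementation: two populations of Gaussian position-tuned units encode the absolute 2D positions of two agents, their responses are combined multiplicatively (the gain field), and a weighted readout over the differences of preferred positions recovers the relative position of the agents. All unit counts, tuning widths, and variable names are illustrative assumptions.

```python
import numpy as np

def gaussian_population(pos, centers, sigma=0.5):
    """Responses of position-tuned units with Gaussian tuning curves."""
    d = np.linalg.norm(centers - pos, axis=1)
    return np.exp(-d**2 / (2 * sigma**2))

# Grid of preferred positions covering the stimulus plane (assumed layout).
xs = np.linspace(-2, 2, 9)
centers = np.array([(x, y) for x in xs for y in xs])

agent_a = np.array([1.0, 0.5])
agent_b = np.array([-0.5, -0.5])

r_a = gaussian_population(agent_a, centers)  # population code for agent A
r_b = gaussian_population(agent_b, centers)  # population code for agent B

# Gain field: multiplicative combination, yielding units jointly tuned
# to the positions of both agents.
gain_field = np.outer(r_a, r_b)

# Readout of relative position: average the differences of preferred
# positions, weighted by the gain-field responses.
diffs = centers[:, None, :] - centers[None, :, :]           # (81, 81, 2)
rel_estimate = (gain_field[..., None] * diffs).sum(axis=(0, 1)) / gain_field.sum()

print(rel_estimate)  # ≈ agent_a - agent_b, i.e. roughly [1.5, 1.0]
```

The multiplicative (gain-modulated) combination is what makes the downstream readout tuned to the *relative* rather than absolute positions, which is the property the model exploits for detecting interactions such as chasing or following.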