Abstract
The visual system provides a representation of what objects are and where they are located. This entails parsing the visual scene into distinct objects. Initially, the visual system encodes information locally. While interactions between adjacent cells can explain how local fragments of an object's contour are extracted from a scene, more global mechanisms must integrate information beyond neighbouring cells to represent extended objects. This talk will examine the nature of intermediate-level computations in the transformation from discrete local sampling to the representation of complex objects. Several paradigms were employed to study how information about the position and orientation of local signals is combined: a shape discrimination task requiring observers to discriminate between contours; a shape coherence task measuring the number of elements required to detect a contour; and a shape illusion in which positional and orientational information is combined inappropriately. The results support the notion of mechanisms that integrate information beyond that of neighbouring cells and are optimally tuned to a range of different contour shapes. Global integration is not restricted to central vision: peripheral data show that certain aspects of this process only emerge at intermediate stages. Moreover, intermediate processing appears vulnerable to damage. Diverse clinical populations (migraineurs, pre-term children, and children with Cortical Visual Impairment) show specific deficits on these tasks that cannot be accounted for by low-level processes. Taken together, the evidence converges on an intermediate level of processing at which sensitivity to global shape attributes emerges. Meeting abstract presented at VSS 2014.