Abstract
Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviors is a fundamental task of the brain. Neurophysiological work has revealed a class of cells in the macaque visual cortex that may be ideally suited to support this neural computation: border ownership cells (Zhou H, Friedman HS, von der Heydt R. J Neurosci 20: 6594–6611, 2000). These orientation-tuned cells appear to respond conditionally to the borders of objects. A behavioral correlate supporting the existence of these cells in humans was demonstrated with two-dimensional luminance-defined objects (von der Heydt R, Macuda T, Qiu FT. J Opt Soc Am A Opt Image Sci Vis 22: 2222–2229, 2005). However, objects in our natural visual environments are often signaled by complex cues, such as motion and binocular disparity. Thus for border ownership systems to effectively support figure-ground segmentation and object depth ordering, they must have access to information from multiple depth cues with strict depth order selectivity. Here we measured in humans (of both sexes) border ownership-dependent tilt aftereffects after adaptation to figures defined by either motion parallax or binocular disparity. We find that both depth cues produce a tilt aftereffect that is selective for figure-ground depth order. Furthermore, we find that the effects of adaptation are transferable between cues, suggesting that these systems may combine depth cues to reduce uncertainty (Bülthoff HH, Mallot HA. J Opt Soc Am A 5: 1749–1758, 1988). These results suggest that border ownership mechanisms have strict depth order selectivity and access to multiple depth cues that are jointly encoded, providing compelling psychophysical support for their role in figure-ground segmentation in natural visual environments.

NEW & NOTEWORTHY
Figure-ground segmentation is a critical function that may be supported by “border ownership” neural systems that conditionally respond to object borders.
We measured border ownership-dependent tilt aftereffects to figures defined by motion parallax or binocular disparity and found aftereffects for both cues. These effects were transferable between cues but selective for figure-ground depth order, suggesting that the neural systems supporting figure-ground segmentation have strict depth order selectivity and access to multiple depth cues that are jointly encoded.
Highlights
Our natural visual environments are complex and often cluttered with objects
We further investigated whether any adaptation effect is transferable between depth order configurations
These results indicate that border ownership cells are sensitive to both motion parallax and binocular disparity and that they show depth order preference
Summary
To interact appropriately with our surrounds, objects must be segmented from other objects and their backgrounds, and their position in depth must be inferred from often fragmented and ambiguous cues such as binocular disparity, motion parallax, and texture. Achieving such so-called “figure-ground segmentation” with the speed and automaticity necessary to function effectively within the environment is nontrivial; understanding how the brain accomplishes this remains of fundamental importance in neuroscience. The same light-dark edge presented within the receptive field of a border ownership neuron can produce a larger increase in firing rate if the light region is part of a distinct “figure” positioned on a dark “background” than vice versa (Fig. 1A). This contingency persists even when the object extends far beyond the classical receptive field of the cell, suggesting that border ownership cells are connected through a network that can identify the borders common to an object. This distinctive characteristic is ideally suited for binding the borders of an object, segmenting it from the background, and allowing other figure-ground mechanisms to retrieve those borders as a whole.