Abstract

The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow the extraction of these more complex boundaries in V2 is not yet clear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on the dynamics of responses to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which differed from those of V1 neurons. Whereas V1 neurons in general preferred a single orientation, one subpopulation of V2 neurons ("transient") showed a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons ("sustained") responded similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforced these distinctions: they enhanced the preference of V1 neurons for continuous orientations and the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on their V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal, but not the spatial, differentiation operation.
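
A minimal simulation can make the proposed mechanism concrete. In the sketch below (our illustration; the 40 ms inhibitory delay, the integration time constant, and the stimulus timing are assumptions, not parameters estimated in the study), delayed feedforward inhibition converts a sustained, V1-like orientation signal into a transient response that fires only at stimulus onset and at changes in orientation, while leaky integration of the same input reproduces a sustained, delayed response:

```python
# Illustrative sketch only: how delayed feedforward inhibition could
# produce "transient" (change-preferring) V2-like responses, and leaky
# integration "sustained" ones. Parameters are toy assumptions.
import numpy as np

dt = 1.0                       # time step (ms)
t = np.arange(0.0, 600.0, dt)  # 600 ms of simulated time

# V1-like drive: one unit per orientation, each giving a sustained
# response while its preferred orientation is on screen.
# Orientation A is shown for 0-300 ms, orientation B for 300-600 ms.
v1_a = (t < 300).astype(float)
v1_b = (t >= 300).astype(float)

def delayed(x, delay_ms):
    """Shift a signal later in time by delay_ms, zero-padded."""
    k = int(delay_ms / dt)
    return np.concatenate([np.zeros(k), x[:-k]]) if k > 0 else x

def relu(x):
    return np.maximum(x, 0.0)

# "Transient" V2 unit: each V1 input is opposed by a delayed inhibitory
# copy of itself. The subtraction cancels the steady-state drive, so the
# unit fires only at stimulus onset and when the orientation changes.
transient = (relu(v1_a - delayed(v1_a, 40.0))
             + relu(v1_b - delayed(v1_b, 40.0)))

# "Sustained" V2 unit: leaky temporal integration of one V1 input,
# reproducing V1-like selectivity with a delayed, smoothed time course.
tau = 50.0  # integration time constant (ms)
sustained = np.zeros_like(t)
for i in range(1, len(t)):
    sustained[i] = sustained[i - 1] + (dt / tau) * (v1_a[i] - sustained[i - 1])

assert transient[int(305 / dt)] > 0.5   # burst just after the orientation change
assert transient[int(200 / dt)] == 0.0  # silent during the steady stimulus
```

Because the delayed inhibitory copy cancels any steady-state drive, the transient unit acts as a temporal differentiator of its V1 input, which is why it prefers discontinuities in orientation over sustained orientations.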

Highlights

  • A fundamental step in analyzing visual scenes is to find object boundaries

  • Our results identify two subpopulations of orientation-selective neurons in area V2 that process orientation signals in a complementary fashion

  • The sustained V2 neurons integrate orientation signals over space and time. Their responses can be understood as the integration of outputs of V1 receptive fields with similar orientation tuning over an extended period of time.

Introduction

A fundamental step in analyzing visual scenes is to find object boundaries. In natural images, some boundaries are defined by luminance differences; others are defined by texture differences. Since the larger receptive fields of V2 are produced by combining the output of V1 neurons (Foster et al., 1985; Levitt et al., 1994; Smith et al., 2007), the extraction of texture boundaries by V2 receptive fields must involve computations across space on the V1 inputs. These computations must accomplish a specific goal, the extraction of texture boundaries, while preserving the luminance-boundary information already extracted by V1. The extraction of boundaries from the retinal image thus serves as an excellent model for revealing how cortical areas interact to carry out sensory processing.
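
The spatial side of this computation can be sketched in the same spirit. In the toy example below (our illustration; the filters, textures, and pooling are simplified assumptions, not the stimuli or model of this study), a second-stage unit compares rectified orientation energy pooled from V1-like filters on the two halves of its receptive field, responding where the texture changes while remaining silent inside a uniform texture:

```python
# Illustrative sketch only: a second-stage "boundary" unit that compares
# pooled orientation energy across the two halves of its receptive field
# (a spatial differentiation of V1-like output). Toy stimuli and filters.
import numpy as np

def texture(n, orientation):
    """Toy texture patch: square-wave stripes at the given orientation."""
    stripes = (np.arange(n) // 4 % 2).astype(float)   # period-8 stripes
    if orientation == "vertical":
        return np.tile(stripes, (n, 1))               # columns alternate
    return np.tile(stripes[:, None], (1, n))          # rows alternate

def orientation_energy(img, orientation):
    """Rectified, pooled output of a crude V1-like oriented filter."""
    axis = 1 if orientation == "vertical" else 0      # derivative direction
    return np.abs(np.diff(img, axis=axis)).mean()

def boundary_signal(left_patch, right_patch):
    """Second-stage unit: difference of pooled orientation energy
    between the two halves of the receptive field."""
    return sum(abs(orientation_energy(left_patch, o)
                   - orientation_energy(right_patch, o))
               for o in ("vertical", "horizontal"))

n = 32
v_tex = texture(n, "vertical")
h_tex = texture(n, "horizontal")

# Receptive field straddling a texture boundary vs. inside one texture:
print("across boundary:", boundary_signal(v_tex, h_tex))  # large
print("within texture: ", boundary_signal(v_tex, v_tex))  # 0.0
```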
