Abstract
When probed with complex stimuli that extend beyond their classical receptive field, neurons in primary visual cortex display complex and non-linear response characteristics. Sparse coding models reproduce some of the observed contextual effects, but still fail to provide a satisfactory explanation in terms of realistic neural structures and cortical mechanisms, since the connection scheme they propose consists only of interactions among neurons with overlapping input fields. Here we propose an extended generative model for visual scenes that includes spatial dependencies among different features. We derive a neurophysiologically realistic inference scheme under the constraint that neurons have direct access only to local image information. The scheme can be interpreted as a network in primary visual cortex where two neural populations are organized in different layers within orientation hypercolumns that are connected by local, short-range and long-range recurrent interactions. When trained with natural images, the model predicts a connectivity structure linking neurons with similar orientation preferences matching the typical patterns found for long-ranging horizontal axons and feedback projections in visual cortex. Subjected to contextual stimuli typically used in empirical studies, our model replicates several hallmark effects of contextual processing and predicts characteristic differences for surround modulation between the two model populations. In summary, our model provides a novel framework for contextual processing in the visual system proposing a well-defined functional role for horizontal axons and feedback projections.
Highlights
Single neurons in the early visual system have direct access to only a small part of a visual scene, which manifests in their ‘classical’ receptive field being localized in visual space.
An influential hypothesis about how the brain processes visual information posits that each given stimulus should be efficiently encoded using only a small number of cells. This idea led to the development of a class of models that provided a functional explanation for various response properties of visual neurons, including the non-linear modulations observed when localized stimuli are placed in a broader spatial context.
It remains to be clarified through which anatomical structures and neural connectivities a network in the cortex could perform the computations that these models require.
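The efficient-coding idea in the highlights — encoding each stimulus with only a few active cells — is commonly formalized as sparse coding: infer coefficients s such that a patch x is reconstructed as D s with an L1 penalty on s. The following is a minimal sketch of such inference using iterative soft-thresholding (ISTA); the dictionary, parameters, and function names are illustrative, not the specific model or connectivity scheme proposed in the paper.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.05, n_iter=200):
    """Infer a sparse coefficient vector s such that x ~ D @ s.

    Minimizes 0.5*||x - D s||^2 + lam*||s||_1 via iterative
    soft-thresholding (ISTA). Columns of D play the role of the
    neurons' receptive fields; s their activations.
    """
    L = np.linalg.norm(D, ord=2) ** 2        # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ s - x)             # gradient of reconstruction error
        z = s - grad / L                     # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return s

# Toy example: a 16-pixel "patch" and a random 2x-overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)               # unit-norm features
s_true = np.zeros(32)
s_true[[3, 17]] = [1.0, -0.5]                # patch built from two features
x = D @ s_true
s_hat = ista_sparse_code(x, D)
# Only a few coefficients remain active: a sparse code for the patch.
```

Note that in this classical formulation each coefficient interacts only with units whose receptive fields overlap its own — precisely the limitation the paper's extended model, with spatial dependencies among features, is meant to overcome.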
Summary
Single neurons in the early visual system have direct access to only a small part of a visual scene, which manifests in their ‘classical’ receptive field (cRF) being localized in visual space. To understand how the brain forms coherent representations of spatially extended components or more complex objects in our environment, one needs to understand how neurons integrate local information with the contextual information represented in neighboring cells. Such integration processes already become apparent in primary visual cortex, where spatial and temporal context strongly modulate a cell’s response to a visual stimulus inside the cRF. For example, [7] found strong cross-orientation facilitation, while [8] reported only moderate levels of cross-orientation facilitation, if any. These discrepancies might be rooted in differences between the experimental setups, such as the particular choice of center/surround stimulus sizes, contrasts, and other parameters like the spatial frequency of the gratings, but they might also indicate that different neurons are specialized for different aspects of information integration.