Abstract

In recent years, population receptive field models fit to fMRI data have been used to characterize responses in human visual cortex (Dumoulin and Wandell, 2008, NeuroImage). Initial population receptive field models represented stimuli as binary contrast masks, but recent models have incorporated a wider range of visual computations in order to predict responses directly from the image pixels. The model-based approach summarizes diverse findings from visual neuroscience and elucidates the visual computations performed in each area. fMRI has the particular advantage of enabling quick and effective exploration of computational properties throughout the entire visual system. A recent two-stage cascade model was shown to account for much of the observed variance of BOLD responses in early visual areas to a wide range of band-limited grayscale images (Kay et al., 2013, PLoS CB). However, the existing model still exhibits certain systematic failures: response amplitudes to contrast patterns with a single orientation are systematically overpredicted, while amplitudes to contrast patterns with extended curved edges are systematically underpredicted. Here we asked whether updating the model to reflect additional findings about the visual response properties of neurons could improve model performance. Physiology, psychophysics, and theory support the hypothesis that neuronal responses are suppressed by an orientation-tuned surround (Schwartz and Simoncelli, 2001, Nature Neuroscience; Cavanaugh, Bair, and Movshon, 2002, Journal of Neurophysiology). We revisited the data from Kay et al. (2013) and implemented an additional computation: a spatially extended, orientation-tuned divisive normalization step. The updated model produced systematically better predictions in V1/V2/V3 without introducing additional parameters or degrees of freedom. These results show steady progress in building increasingly accurate models of the computations performed in the human visual pathways.
Meeting abstract presented at VSS 2015
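The core computation the abstract describes, divisive normalization by a spatially extended, same-orientation surround pool, can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the box-shaped surround, exponent `n`, semisaturation constant `sigma`, and window size are assumed here for illustration only.

```python
import numpy as np

def divisive_normalize(resp, sigma=0.1, n=2.0, surround_size=5):
    """Orientation-tuned, spatially extended divisive normalization (sketch).

    resp: array of shape (n_orientations, H, W) holding rectified
          oriented-filter responses.
    Each response is divided by a pool of same-orientation responses
    averaged over a spatial surround window (hypothetical parameters).
    """
    k = surround_size
    pad = k // 2
    num = resp ** n
    # Average same-orientation energy over a k x k spatial surround
    # (uniform box pool; edge padding keeps the output the same size).
    padded = np.pad(num, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    pool = np.zeros_like(num)
    for dy in range(k):
        for dx in range(k):
            pool += padded[:, dy:dy + resp.shape[1], dx:dx + resp.shape[2]]
    pool /= k * k
    # Divisive normalization: strong same-orientation surround energy
    # suppresses the response, as in Schwartz and Simoncelli (2001).
    return num / (sigma ** n + pool)
```

Under this sketch, an extended single-orientation pattern drives a large same-orientation pool and is therefore suppressed more than an isolated, locally oriented element, which is the qualitative behavior the abstract invokes to fix the overprediction for single-orientation contrast patterns.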
