Abstract

A goal in visual neuroscience is to explain how neurons respond to natural scenes. However, neurons are generally tested using simpler stimuli, often because they can be transformed smoothly, allowing the measurement of tuning functions (i.e., response peaks and slopes). Here, we test the idea that all classic tuning curves can be viewed as slices of a higher-dimensional tuning landscape. We use activation-maximizing stimuli ("prototypes") as landmarks in a generative image space and map tuning functions around these peaks. We find that neurons show smooth bell-shaped tuning consistent with radial basis functions, spanning a vast image transformation range, with systematic differences in landscape geometry from V1 to inferotemporal cortex. By modeling these trends, we infer that neurons in the higher visual cortex have higher intrinsic feature dimensionality. Overall, these results suggest that visual neurons are better viewed as signaling distances to prototypes on an image manifold.
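The radial-basis-function view described above can be illustrated with a minimal sketch: a neuron's response is modeled as a Gaussian bump centered on its prototype in an image-embedding space, falling off smoothly with distance. The function name, the 2-D latent space, and the width parameter `sigma` are illustrative assumptions, not the paper's actual model.

```python
import math

def rbf_response(stimulus, prototype, sigma=1.0):
    """Hypothetical radial-basis tuning curve: response decays with
    Euclidean distance from the neuron's preferred ("prototype") point
    in a generative image space. sigma sets the tuning width."""
    dist = math.sqrt(sum((s - p) ** 2 for s, p in zip(stimulus, prototype)))
    return math.exp(-dist ** 2 / (2 * sigma ** 2))

# Response peaks at the prototype and declines smoothly away from it,
# giving the bell-shaped tuning described in the abstract.
at_peak = rbf_response([0.0, 0.0], [0.0, 0.0])
near = rbf_response([0.5, 0.0], [0.0, 0.0])
far = rbf_response([2.0, 0.0], [0.0, 0.0])
print(at_peak, near, far)
```

Under this toy model, any 1-D stimulus transformation (a "slice" through the space) yields a smooth bell-shaped tuning curve whose peak sits where the slice passes closest to the prototype.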
