Abstract
How do high-level visual regions process the temporal aspects of our visual experience? While the temporal sensitivity of early visual cortex has been studied with fMRI in humans, temporal processing in high-level visual cortex is largely unknown. By modeling neural responses with millisecond precision in separate sustained and transient channels, and by introducing a flexible encoding framework that captures differences in neural temporal integration time windows and response nonlinearities, we predict fMRI responses across visual cortex for stimuli ranging from 33 ms to 20 s. Using this innovative approach, we discovered that lateral category-selective regions respond to visual transients associated with stimulus onsets and offsets but not to sustained visual information. Thus, lateral category-selective regions compute moment-to-moment visual transitions, but not stable features of the visual input. In contrast, ventral category-selective regions process both sustained and transient components of the visual input. Our model revealed that sustained channel responses to prolonged stimuli exhibit adaptation, whereas transient channel responses to stimulus offsets are surprisingly larger than those to stimulus onsets. This large offset transient response may reflect a memory trace of the stimulus when it is no longer visible, whereas the onset transient response may reflect rapid processing of new items. Together, these findings reveal previously unconsidered, fundamental temporal mechanisms that distinguish visual streams in the human brain. Importantly, our results underscore the promise of modeling brain responses with millisecond precision to understand the underlying neural computations.
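For readers who want to see the shape of such a model, below is a minimal sketch in Python of a two-channel (sustained plus transient) temporal encoding model of the kind described above: a millisecond-resolution stimulus time course is passed through a monophasic and a biphasic neural impulse response, the transient channel is rectified so that it responds at both stimulus onsets and offsets, and each channel is then downsampled and convolved with a hemodynamic response function to predict fMRI time courses. The impulse-response shapes, parameter values, and function names (gamma_irf, two_channel_prediction) are illustrative assumptions, not the exact model fit in the paper.

```python
import numpy as np

def gamma_irf(t_ms, tau=4.94, n=9):
    """Gamma-shaped neural impulse response at 1 ms resolution (illustrative parameters)."""
    h = (t_ms / tau) ** (n - 1) * np.exp(-t_ms / tau)
    return h / h.sum()

def two_channel_prediction(stim_ms, hrf_tr, tr_ms=1000):
    """Predict sustained- and transient-channel fMRI time courses.

    stim_ms : binary 1 ms-resolution vector (1 = stimulus on); its length must be a
              multiple of tr_ms for the simple downsampling used here.
    hrf_tr  : hemodynamic response function sampled at the TR.
    """
    t = np.arange(1.0, 500.0)                        # 0.5 s support for the IRFs (ms)
    irf_sustained = gamma_irf(t)                     # monophasic IRF -> sustained channel
    irf_transient = irf_sustained - gamma_irf(t, tau=4.94 * 1.33)  # biphasic IRF -> transient channel

    # Neural responses at millisecond resolution
    sustained = np.convolve(stim_ms, irf_sustained)[: len(stim_ms)]
    transient = np.convolve(stim_ms, irf_transient)[: len(stim_ms)] ** 2  # rectification: responds at onsets AND offsets

    def neural_to_bold(x):
        x_tr = x.reshape(-1, tr_ms).mean(axis=1)     # downsample the neural response to the TR
        return np.convolve(x_tr, hrf_tr)[: len(x_tr)]

    return neural_to_bold(sustained), neural_to_bold(transient)
```

In models of this kind, the two predicted time courses are weighted by region-specific coefficients fit to the data; a region whose fitted transient weight dominates would, as described above for lateral category-selective regions, respond to stimulus onsets and offsets but not to the sustained portion of the stimulus.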
Highlights
How does the brain encode the timing of our visual experience? Using functional magnetic resonance imaging and a generative temporal model with millisecond resolution, we discovered that visual regions in the lateral and ventral processing streams fundamentally differ in their temporal processing of the visual input
Regions in lateral temporal cortex process visual transients associated with the beginning and ending of the stimulus, but not its stable aspects
Summary
How do high-level visual areas encode the temporal characteristics of our visual experience? The temporal sensitivity of early visual areas has been studied with electrophysiology in nonhuman primates [1, 2, 3, 4] and recently with functional magnetic resonance imaging (fMRI) in humans [5, 6]. Because the standard approach of using a general linear model (GLM) to predict fMRI signals from the stimulus [8] assumes that responses sum linearly over time, it cannot capture nonlinearities in responses to stimuli spanning a wide range of durations, and the temporal processing characteristics of human high-level visual cortex have remained largely elusive (but see [12, 14, 15, 16, 17]). We hypothesized that if these nonlinearities are of neural (rather than BOLD) origin, then a new approach that models neural nonlinearities to predict fMRI responses could be used to characterize temporal processing in high-level visual cortex. Indeed, recent studies show that accurately modeling neural responses to brief visual stimuli at millisecond resolution predicts fMRI responses better than the GLM [5, 6, 18]. Generative computational models of neural processing offer a framework that can provide key insights into multiple facets of temporal processing, including integration time windows [19, 20, 21], temporal channel contributions [5, 18, 22, 23, 24, 25], and response nonlinearities [5, 6, 9, 10, 11, 12, 18].
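To illustrate why a GLM that is linear in the stimulus can fall short for durations spanning tens of milliseconds to tens of seconds, the sketch below compares a linear (GLM-like) prediction with a variant in which an adapting nonlinearity is applied to the millisecond-level neural response before convolution with the hemodynamic response function (HRF). The HRF parameters, the exponential-adaptation form, and its time constant are assumptions chosen only for illustration; they are not the model fit in the paper.

```python
import numpy as np
from math import gamma as gamma_fn

FS = 1000        # neural model resolution: 1 kHz (1 ms bins)
TR_S = 1.0       # fMRI sampling interval (s); illustrative value

def hrf(tr_s=TR_S, length_s=30.0):
    """Double-gamma HRF sampled at the TR (SPM-like shape, illustrative parameters)."""
    t = np.arange(0.0, length_s, tr_s)
    g = lambda t, k, theta: t ** (k - 1) * np.exp(-t / theta) / (gamma_fn(k) * theta ** k)
    h = g(t, 6.0, 1.0) - g(t, 16.0, 1.0) / 6.0
    return h / h.sum()

def predict_bold(duration_ms, total_s=40.0, neural_nonlinearity=None):
    """BOLD prediction for one stimulus; None reproduces the linear (GLM-like) case."""
    neural = np.zeros(int(total_s * FS))
    neural[:duration_ms] = 1.0                               # stimulus 'on' at 1 ms resolution
    if neural_nonlinearity is not None:
        neural = neural_nonlinearity(neural)                 # nonlinearity acts BEFORE the HRF
    per_tr = neural.reshape(-1, int(TR_S * FS)).mean(axis=1) # downsample to the TR
    return np.convolve(per_tr, hrf())[: len(per_tr)]

# Linear predictions scale with duration: a 20 s stimulus is predicted to evoke
# a response over a hundred times larger than a 33 ms one.
ratio_linear = predict_bold(20000).max() / predict_bold(33).max()

# An illustrative adapting nonlinearity (cf. the adaptation of sustained-channel
# responses described in the Abstract) compresses that predicted ratio.
def adapt(neural, tau_ms=2000.0):
    on = neural > 0
    out = np.zeros_like(neural)
    out[on] = np.exp(-np.arange(on.sum()) / tau_ms)          # exponential decay during the 'on' period
    return out

adapted_33 = predict_bold(33, neural_nonlinearity=adapt).max()
adapted_20s = predict_bold(20000, neural_nonlinearity=adapt).max()
print(f"predicted 20 s / 33 ms response ratio: linear={ratio_linear:.0f}, adapted={adapted_20s / adapted_33:.0f}")
```

The point of the comparison is only that a nonlinearity applied at the neural stage, before HRF convolution, changes the predicted ratio of responses across durations; this is exactly the kind of effect a GLM that is linear in the stimulus cannot express.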