Abstract

Here we introduce a new Python package, img2fmri, that predicts group-level fMRI responses to individual images. The prediction model uses an artificial deep neural network (DNN), as DNNs trained on real-world visual categorization tasks have been successful at predicting cortical responses in the human visual cortex. To validate the model, we predict fMRI responses to images from a new dataset that the model has not previously seen. We then show how our frame-by-frame prediction model can be extended to a continuous visual stimulus by predicting an fMRI response to Pixar Animation Studios' short film Partly Cloudy. Analyzing the timepoint-timepoint similarity of our predicted fMRI response around human-annotated event boundaries in the movie, we find that our model outperforms the baseline model in describing the dynamics of the real fMRI response around these boundaries, particularly in the timepoints just before and at an event. These analyses suggest that, in visual areas of the brain, at least some of the temporal dynamics observed in the brain's processing of continuous, naturalistic stimuli can be explained by dynamics in the stimulus itself, since they can be predicted by our frame-by-frame model. All code, analyses, tutorials, and installation instructions can be found at https://github.com/dpmlab/img2fmri.
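
As a rough illustration of the workflow summarized above, the sketch below shows how one might generate a predicted response for a sequence of movie frames and compute its timepoint-timepoint similarity matrix. The img2fmri.predict call, its directory-of-frames input, and the (timepoints, voxels) output shape are assumptions made for illustration, not the package's documented interface; see the repository linked above for the actual usage and installation instructions.

```python
# Hypothetical usage sketch -- the predict() call and its input/output
# conventions are assumptions; consult https://github.com/dpmlab/img2fmri
# for the actual API.
import numpy as np
import img2fmri  # assumed import name

# Assume predict() maps a directory of extracted movie frames to a
# (n_timepoints, n_voxels) array of predicted group-level fMRI responses.
predicted = img2fmri.predict("frames/")

# Timepoint-timepoint similarity: Pearson correlation between every pair of
# predicted volumes, as in the event-boundary analysis summarized above.
similarity = np.corrcoef(predicted)

# Inspect the similarity structure in a small window around a (purely
# illustrative) human-annotated event boundary at timepoint 100.
boundary = 100
window = similarity[boundary - 5:boundary + 5, boundary - 5:boundary + 5]
print(window.round(2))
```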
