Abstract

A detailed understanding of visual object representations in brain and behavior is fundamentally limited by the number of stimuli that can be presented in any one experiment. Ideally, the space of objects should be sampled in a representative manner, with (1) maximal breadth of the stimulus material and (2) minimal bias in the object categories. Such a dataset would allow the detailed study of object representations and provide a basis for testing and comparing computational models of vision and semantics. Towards this end, we recently developed the large-scale object image database THINGS of more than 26,000 images of 1,854 object concepts sampled representatively from the American English language (Hebart et al., 2019). Here we introduce THINGS-fMRI and THINGS-MEG, two large-scale brain imaging datasets using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). Over the course of 12 scanning sessions, 7 participants (fMRI: n = 3, MEG: n = 4) were presented with images from the THINGS database (fMRI: 8,740 images of 720 concepts, MEG: 22,448 images of 1,854 concepts) while they carried out an oddball detection task. To reduce noise, participants’ heads were stabilized and repositioned between sessions using custom head casts. To facilitate use by other researchers, the data were converted to the Brain Imaging Data Structure format (BIDS; Gorgolewski et al., 2016) and preprocessed with fMRIPrep (Esteban et al., 2018). Estimates of the noise ceiling and general quality control demonstrate high data quality, with only small displacement between sessions. By carrying out a broad and representative multimodal sampling of object representations in humans, we hope this dataset will be of use to visual neuroscience and computational vision research alike.
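
For readers who want a quick sense of how a noise ceiling of this kind can be computed, the sketch below illustrates a generic split-half reliability estimate over repeated image presentations, written in Python with NumPy. The response array shape, the number of splits, and the Spearman-Brown correction are assumptions chosen for illustration; this is a minimal sketch of the general technique, not necessarily the exact procedure used for THINGS-fMRI and THINGS-MEG.

    import numpy as np

    def split_half_noise_ceiling(responses, n_splits=100, seed=0):
        """Generic split-half reliability estimate (a common noise-ceiling proxy).

        responses: array of shape (n_repetitions, n_stimuli, n_channels),
                   e.g. single-trial response estimates for repeatedly shown images.
        Returns the Spearman-Brown corrected split-half correlation per channel.
        """
        rng = np.random.default_rng(seed)
        n_rep, n_stim, n_chan = responses.shape
        corrs = np.zeros((n_splits, n_chan))
        for s in range(n_splits):
            order = rng.permutation(n_rep)
            # Average responses within two random halves of the repetitions
            half1 = responses[order[: n_rep // 2]].mean(axis=0)   # (n_stim, n_chan)
            half2 = responses[order[n_rep // 2:]].mean(axis=0)
            for c in range(n_chan):
                # Correlate the two halves across stimuli, per channel
                r = np.corrcoef(half1[:, c], half2[:, c])[0, 1]
                # Spearman-Brown correction for having halved the data
                corrs[s, c] = 2 * r / (1 + r)
        return corrs.mean(axis=0)

    # Hypothetical usage with simulated data (12 repetitions, 100 stimuli, 50 channels):
    # responses = np.random.randn(12, 100, 50)
    # nc = split_half_noise_ceiling(responses)
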
