Abstract
Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain ‘fills-in’ information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics. In the present study, we used EEG and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than visible stimuli. Monitoring the position of imagined objects thus utilises similar perceptual and attentional processes as monitoring objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and their timing is influenced by the predictability of the stimulus.
Highlights
Internally generated representations of the world, as opposed to stimulus-driven representations, are important for day-to-day tasks such as constructing a mental map to give a stranger directions, remembering where you last saw a lost item, or tracking the location of a car that becomes occluded by another vehicle.
Time-resolved multivariate analyses revealed that patterns of activity associated with visual processing in random sequences were also associated with processing of visible and imagined stimulus positions in the tracking task, but with different temporal dynamics.
This study provides evidence that internal representations of spatial position rely on mechanisms of visual processing, but that these are applied with temporal dynamics that differ from those of actual perceptual processes.
Summary
Internally generated representations of the world, as opposed to stimulus-driven representations, are important for day-to-day tasks such as constructing a mental map to give a stranger directions, remembering where you last saw a lost item, or tracking the location of a car that becomes occluded by another vehicle. In these cases, there is little or no relevant perceptual input, yet the brain successfully constructs a picture of relevant visual features such as object form and spatial position. Neural activation within the ventral stream is consistent with generative feedback models of information flow from higher-level to low-level visual regions (Breedlove et al., 2020). Consistent with this account, recent work using magnetoencephalography and time-resolved decoding showed that imagery of faces and houses involves similar patterns of activation as viewing those stimuli, but with different temporal dynamics. These findings suggest that overlapping mid- and high-level visual processes underlie perceptual and internally generated representations of spatial location, and that these are pre-activated in anticipation of a stimulus.