Abstract

Attention allows us to select relevant information and ignore irrelevant information in our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the current study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous timecourses of neural representations of the attended feature (timepoint-by-timepoint inverted encoding model reconstructions) and the attended location (timepoint-by-timepoint decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them, and on half of the trials received a shift cue mid-trial. We trained models on a stable period from Hold attention trials, and then reconstructed or decoded the attended orientation and location at each timepoint on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention, and that there may be timepoints during the shift when (1) feature and location representations become uncoupled, and (2) both the previously attended and currently attended orientations are represented with roughly equal strength. These results offer insight into the dynamics of attentional shifts, and the noninvasive techniques developed in the current study lend themselves well to a wide variety of future applications.
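The abstract describes the analysis only at a high level. As a rough illustration of a timepoint-by-timepoint inverted encoding model of the kind referred to above, the sketch below uses NumPy with a raised-cosine orientation basis and a least-squares weight estimate. The function names, basis parameters, and data shapes are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def make_basis(n_channels=9, n_orientations=180, power=8):
    """Raised-cosine channel basis tiling orientation space (0-179 deg)."""
    centers = np.arange(0, 180, 180 / n_channels)
    angles = np.arange(n_orientations)
    # Double the angular difference so the tuning wraps over the 180-deg space,
    # then half-wave rectify and raise to a power to narrow the channels.
    basis = np.cos(2 * np.deg2rad(angles[:, None] - centers[None, :]))
    basis = np.clip(basis, 0, None) ** power
    return basis  # shape: (n_orientations, n_channels)

def iem_timecourse(train_eeg, train_ori, test_eeg, basis):
    """
    train_eeg : (n_train_trials, n_electrodes), stable-period training data
    train_ori : (n_train_trials,), attended orientation in integer degrees (0-179)
    test_eeg  : (n_test_trials, n_electrodes, n_timepoints), shift-trial data
    Returns channel-response reconstructions, (n_test_trials, n_channels, n_timepoints).
    """
    # Predicted channel responses for the training trials: trials x channels
    C_train = basis[train_ori.astype(int)]
    # Forward model B = C @ W; estimate electrode weights by least squares.
    W = np.linalg.pinv(C_train) @ train_eeg          # channels x electrodes
    n_test, _, n_time = test_eeg.shape
    recon = np.empty((n_test, basis.shape[1], n_time))
    W_inv = np.linalg.pinv(W)                        # electrodes x channels
    for t in range(n_time):
        # Invert the model at each timepoint to recover channel responses.
        recon[:, :, t] = test_eeg[:, :, t] @ W_inv
    return recon
```

In a sketch like this, the strength of the channel response centered on each grating's orientation could be tracked across timepoints to produce feature-reconstruction timecourses analogous to those described above.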
