Abstract

Although humans can direct their attention to visual targets with or without eye movements, it remains unclear how different brain mechanisms control visual attention and eye movements together and/or separately. Here, we measured MEG and fMRI data during covert/overt visual pursuit tasks and estimated cortical currents using our previously developed extra-dipole, hierarchical Bayesian method. Then, we predicted the time series of target positions and velocities from the estimated cortical currents of each task using a sparse machine-learning algorithm. The predicted target positions/velocities had high temporal correlations with the actual visual target kinetics. Additionally, we investigated the generalization ability of the predictive models among three conditions: control, covert, and overt pursuit tasks. When training and testing data came from the same task, reconstruction accuracy was highest for the overt task, followed by covert and control, in that order. When training and testing data were selected from different tasks, accuracies followed the reverse order. These results are well explained by the assumption that the predictive models consist of combinations of three computational brain functions: visual information-processing, maintenance of attention, and eye-movement control. Our results indicate that separate subsets of neurons in the same cortical regions control visual attention and eye movements differently.
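The decoding pipeline described above (predict target kinetics from estimated cortical currents with a sparse learner, then score by temporal correlation) can be sketched as follows. This is a minimal illustration on synthetic data: the Lasso estimator, its `alpha`, and all array shapes are assumptions for demonstration, not the paper's actual sparse algorithm, parameters, or dataset.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic stand-ins (assumptions, not the real data):
# X: estimated cortical currents, shape (time samples, cortical dipoles)
# y: horizontal target position over time
n_samples, n_dipoles = 600, 200
X = rng.standard_normal((n_samples, n_dipoles))
true_w = np.zeros(n_dipoles)
true_w[:10] = rng.standard_normal(10)   # only a sparse subset of dipoles is informative
y = X @ true_w + 0.1 * rng.standard_normal(n_samples)

# Split samples into training and test sets (the paper splits single trials)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# Sparse linear decoder; Lasso stands in for the paper's sparse algorithm
model = Lasso(alpha=0.05).fit(X_tr, y_tr)
y_hat = model.predict(X_te)

# Reconstruction accuracy: temporal correlation between predicted
# and actual target kinetics
r = np.corrcoef(y_te, y_hat)[0, 1]
print(f"temporal correlation: {r:.3f}")
```

The L1 penalty drives most uninformative dipole weights to exactly zero, which is the practical appeal of a sparse decoder when the number of cortical dipoles far exceeds the number of informative ones.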

Highlights

  • Humans can direct their attention to visual targets with or without eye movements, but it remains unclear how different brain mechanisms control visual attention and eye movements together and/or separately

  • This study examined the relationship between the mechanisms that govern maintenance of attention on a moving object and those that govern maintenance of fixation on a moving object, by investigating the generalization ability of machine-learning-based predictive models from MEG signals to target motion among three experimental conditions: control, covert pursuit, and overt pursuit tasks

  • When the training and test data came from the same task type, we divided all single-trial data into training and test datasets and predicted the time series of target positions and velocities from the estimated cortical currents
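The cross-condition generalization test sketched in the highlights (fit a decoder on one task, score it on all three) can be illustrated with a toy simulation. The shared/task-specific weight structure, the Lasso decoder, and every number below are assumptions chosen only to make the within-task vs. cross-task contrast visible; none of it reproduces the paper's data or method.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Toy setup: each condition shares a few informative dipoles with the others
# and adds a few condition-specific ones (an assumption for illustration).
n, d = 400, 100
shared = rng.standard_normal(d) * (np.arange(d) < 5)  # dipoles used by all tasks

def make_condition(seed):
    """Simulate one task: shared plus condition-specific sparse weights."""
    local = np.random.default_rng(seed)
    specific = np.zeros(d)
    specific[local.choice(d, 5, replace=False)] = local.standard_normal(5)
    w = shared + specific
    X = rng.standard_normal((n, d))
    y = X @ w + 0.1 * rng.standard_normal(n)
    # return (train split, test split)
    return (X[: n // 2], y[: n // 2]), (X[n // 2 :], y[n // 2 :])

conditions = {name: make_condition(i)
              for i, name in enumerate(["control", "covert", "overt"])}

# Train a decoder on each condition, test on every condition,
# and collect the temporal correlations in a 3x3 generalization table.
results = {}
for train_name, ((X_tr, y_tr), _) in conditions.items():
    model = Lasso(alpha=0.05).fit(X_tr, y_tr)
    for test_name, (_, (X_te, y_te)) in conditions.items():
        r_val = np.corrcoef(y_te, model.predict(X_te))[0, 1]
        results[(train_name, test_name)] = r_val
        print(f"train={train_name:7s} test={test_name:7s} r={r_val:.2f}")
```

In this toy version the diagonal of the table (same train/test task) beats the off-diagonal entries, because a decoder fitted to one condition captures that condition's specific dipoles as well as the shared ones.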


Introduction

Humans can direct their attention to visual targets with or without eye movements, but it remains unclear how different brain mechanisms control visual attention and eye movements together and/or separately. Thompson et al. investigated the link between frontal eye field (FEF) activity and covert spatial attention by recording from FEF visual and saccade-related neurons in monkeys performing covert visual search tasks without eye movements. They reported that distinct neural populations in FEF subserve saccades and visual selection [9–13]. Ohlendorf et al. investigated the effects of dissociating visual attention and gaze direction during smooth pursuit eye movements using functional magnetic resonance imaging (fMRI) [14]. They found that both covert and overt pursuit activated the cortical oculomotor network, indicating that covert and overt pursuit are processed by similar neural networks.
