Abstract

Attention can be measured with different types of cognitive tasks, such as the Stroop task, the Eriksen flanker task, and the Psychomotor Vigilance Task (PVT). Despite their differing content, all three tasks require visual attention. To learn generalized representations from the electroencephalogram (EEG) across cognitive attention tasks, extensive intra- and inter-task attention classification experiments were conducted on the three types of attention-task data using SVM, EEGNet, and TSception. With cross-validation in the intra-task experiments, TSception achieved significantly higher classification accuracies than the other methods: 82.48%, 88.22%, and 87.31% for the Stroop, flanker, and PVT tests, respectively. In the inter-task experiments, the deep learning methods outperformed SVM, with most of the accuracy drops not being significant. Our experiments indicate that common knowledge exists across cognitive attention tasks, and that deep learning methods can learn generalized representations better.
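The intra- vs. inter-task evaluation protocol described above can be sketched with a simple SVM baseline. This is a minimal illustration, not the authors' pipeline: the feature matrices below are random placeholders standing in for per-trial EEG features, and the task names are taken from the abstract.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder feature matrices for two attention tasks (e.g. Stroop, flanker):
# 200 trials x 64 features each, with binary attention labels.
# Real experiments would use EEG-derived features per trial.
X_stroop = rng.normal(size=(200, 64))
y_stroop = rng.integers(0, 2, size=200)
X_flanker = rng.normal(size=(200, 64))
y_flanker = rng.integers(0, 2, size=200)

clf = SVC(kernel="rbf")

# Intra-task: cross-validation within a single task's data.
intra_acc = cross_val_score(clf, X_stroop, y_stroop, cv=5).mean()

# Inter-task: train on one task, test on another, probing whether the
# learned representation transfers across attention tasks.
inter_acc = clf.fit(X_stroop, y_stroop).score(X_flanker, y_flanker)

print(f"intra-task accuracy: {intra_acc:.2f}")
print(f"inter-task accuracy: {inter_acc:.2f}")
```

On real EEG the gap between the two accuracies is what the abstract's inter-task experiments quantify; the same protocol applies unchanged when the SVM is swapped for EEGNet or TSception.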
