Abstract
Research on brain signals as indicators of attentional state is moving from laboratory environments to everyday settings. Uncovering the attentional focus of individuals in such settings is challenging because there is usually limited information about real-world events, as well as a lack of data from the real-world context at hand that is correctly labeled with respect to individuals' attentional state. In most approaches, such labeled data are needed to train attention-monitoring models. Here, we investigate whether unsupervised clustering can be combined with physiological synchrony in the electroencephalogram (EEG), electrodermal activity (EDA), and heart rate to automatically identify groups of individuals sharing attentional focus, without using knowledge of the sensory stimuli or the attentional focus of any of the individuals. We used data from an experiment in which 26 participants listened to an audiobook interspersed with emotional sounds and beeps. Thirteen participants were instructed to focus on the narrative of the audiobook and 13 were instructed to focus on the interspersed emotional sounds and beeps. We used a broad range of commonly applied dimensionality reduction and ordination techniques, further referred to as mappings, in combination with unsupervised clustering algorithms to identify the two groups of individuals sharing attentional focus based on physiological synchrony. Analyses were performed using the three modalities EEG, EDA, and heart rate separately, and using all possible combinations of these modalities. The best unimodal results were obtained when applying clustering algorithms to physiological synchrony in EEG, yielding a maximum clustering accuracy of 85%. Even though EDA or heart rate by itself did not yield accuracies significantly above chance level, combining EEG with these measures in a multimodal approach generally resulted in higher classification accuracies than using EEG alone. In addition, classification results for multimodal data were more consistent across algorithms than those for unimodal data, making the choice of algorithm less critical. Our finding that unsupervised classification into attentional groups is possible supports studies of attentional engagement in everyday settings.
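To make the pipeline concrete, the sketch below clusters participants directly from a pairwise synchrony matrix. It is a minimal illustration, not the paper's implementation: Pearson correlation of synthetic signals stands in for the study's physiological-synchrony measure, and all variable names and parameter choices are illustrative assumptions.

```python
# Minimal sketch, assuming Pearson correlation as a stand-in synchrony measure.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Placeholder data: 26 participants x 5000 time samples (e.g., an EEG feature
# trace); the study's actual synchrony computation is not reproduced here.
signals = rng.standard_normal((26, 5000))

# Pairwise synchrony: correlation between every pair of participants.
sync = np.corrcoef(signals)

# Spectral clustering expects a non-negative affinity matrix; clip negative
# correlations to zero (the diagonal is already 1).
affinity = np.clip(sync, 0.0, None)

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
print(labels)  # candidate group membership (0/1) per participant
```

With real data, `signals` would be replaced by per-participant physiological time series, and the resulting labels would be compared against the known attention groups to compute clustering accuracy.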
Highlights
Research on brain signals as indicators of mental state, such as attention, is moving from laboratory environments to everyday settings.
A complete overview of clustering performance for all used combinations of mapping algorithms and clustering algorithms, based on physiological synchrony in either EEG, electrodermal activity (EDA), or heart rate, is presented in Supplementary Table A1.
The best performance is obtained using physiological synchrony in EEG [median (Mdn) = 73%, interquartile range (IQR) = 12% across algorithms], with a maximum clustering accuracy of 85% when using spectral clustering on the raw distance matrix or after applying Principal Coordinate Analysis (PCoA) ordination; a sketch of the PCoA step follows below.
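The PCoA (classical multidimensional scaling) step named in the last highlight can be written in a few lines. The following is a hedged sketch assuming a generic participant-by-participant distance matrix; the `pcoa` helper, the toy distances, and the use of k-means on the ordination coordinates are simplifying assumptions for brevity, not the authors' exact method.

```python
# Hedged sketch: PCoA of a distance matrix, then clustering the coordinates.
import numpy as np
from sklearn.cluster import KMeans

def pcoa(dist: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Principal Coordinate Analysis (classical MDS) of a distance matrix."""
    n = dist.shape[0]
    # Gower's double centring of the squared distances.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J
    eigvals, eigvecs = np.linalg.eigh(B)   # ascending order
    idx = np.argsort(eigvals)[::-1]        # largest eigenvalues first
    eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]
    # Negative eigenvalues can arise for non-Euclidean distances; clip them.
    scale = np.sqrt(np.clip(eigvals[:n_components], 0.0, None))
    return eigvecs[:, :n_components] * scale

# Toy symmetric distance matrix standing in for a synchrony-derived one
# (e.g., 1 - synchrony); random points are used here purely for illustration.
rng = np.random.default_rng(0)
points = rng.standard_normal((26, 3))
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

coords = pcoa(dist, n_components=2)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
print(labels)
```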
Summary
Research on brain signals as indicators of mental state, such as attention, is moving from laboratory environments to everyday settings. In a supervised machine learning approach, a model is trained on data recorded when information was available about events and about the mental state of the individuals, so that it can discriminate between the mental states of interest in unseen data collected when contextual information is limited. Such paradigms have been widely applied, for instance to recognize the emotional response to videos (Soleymani et al., 2011, 2015), to distinguish between different mental workload conditions (Hogervorst et al., 2014), or to estimate the attentional state of individuals (Abiri et al., 2019; Vortmann et al., 2019). In everyday settings, however, the ground-truth mental state information needed in the training phase is often not available (Brouwer et al., 2015).
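For contrast with the unsupervised approach studied in this paper, the supervised paradigm described above reduces to training a classifier on labeled examples and applying it to held-out data. This is a generic, hedged sketch on synthetic placeholder features and labels, not a reconstruction of any cited study's model.

```python
# Generic supervised sketch: training requires ground-truth labels, which are
# often unavailable in everyday settings. All data here are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))  # placeholder physiological feature vectors
y = rng.integers(0, 2, 200)         # placeholder mental-state labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)  # needs labeled training data
print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance on random labels
```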