Open-access electroencephalography (EEG) datasets are available for emotion recognition studies, in which external auditory or visual stimuli are used to artificially evoke pre-defined emotions. In this study, we present a novel EEG dataset capturing the emotional responses induced during realistic human-computer interaction (HCI) with a voice user interface system that mimics natural human-to-human communication. To validate the dataset through neurophysiological investigation and binary emotion classification, we applied a series of signal processing and machine learning methods to the EEG data. The maximum classification accuracy ranged from 43.3% to 90.8% across the 38 subjects, and the classification features could be interpreted neurophysiologically. Because our EEG data were acquired in a natural HCI environment, they could support the development of reliable HCI systems. In addition, auxiliary physiological data recorded simultaneously with the EEG (electrocardiogram, photoplethysmogram, galvanic skin response, and facial images) also showed plausible results; these modalities could be used for automatic emotion discrimination either independently of the EEG data or together with them through multi-modal fusion.
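The abstract does not detail the specific signal-processing and machine-learning pipeline used for the binary classification. The sketch below illustrates one common baseline for this kind of task: log band-power features extracted via Welch's method, classified with a linear SVM under cross-validation. The sampling rate, frequency bands, synthetic stand-in data, and all names here are illustrative assumptions, not the authors' published method.

```python
# Illustrative sketch: binary emotion classification from EEG band power.
# The sampling rate, bands, and synthetic data are assumptions, not the
# pipeline reported in the paper.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

FS = 250  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    feats = []
    for trial in epochs:
        # Welch power spectral density per channel
        f, psd = welch(trial, fs=FS, nperseg=FS * 2, axis=-1)
        trial_feats = []
        for lo, hi in BANDS.values():
            idx = (f >= lo) & (f < hi)
            # Log-scaled mean PSD within the band, one value per channel
            trial_feats.append(np.log(psd[:, idx].mean(axis=-1)))
        feats.append(np.concatenate(trial_feats))
    return np.array(feats)

# Placeholder for the real data loader; synthetic stand-in data below.
rng = np.random.default_rng(0)
X_epochs = rng.standard_normal((40, 32, FS * 4))  # 40 trials, 32 channels, 4 s
y = rng.integers(0, 2, size=40)                   # binary emotion labels

X = band_power_features(X_epochs)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Running such a pipeline separately for each of the 38 subjects would yield the kind of per-subject accuracy range the abstract reports; the auxiliary modalities (ECG, PPG, GSR, facial images) could be incorporated by concatenating their features before classification (feature-level fusion) or by combining per-modality classifier outputs (decision-level fusion).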