Abstract
Machine learning approaches have been fruitfully applied to several neurophysiological signal classification problems. Given the relevance of emotion in human cognition and behaviour, an important application of machine learning has emerged in the field of emotion identification based on neurophysiological activity. Nonetheless, results in the literature vary widely depending on the neuronal activity measurement, the signal features and the classifier type. The present work aims to provide new methodological insight into machine learning applied to emotion identification based on electrophysiological brain activity. To this end, we analysed previously recorded EEG activity measured while emotional stimuli of high and low arousal (auditory and visual) were presented to a group of healthy participants. The target signal for classification was the brain activity preceding stimulus onset. Classification performance of three different classifiers (linear discriminant analysis, support vector machine and k-nearest neighbour) was compared using both spectral and temporal features. Furthermore, we contrasted the classifiers’ performance with static and dynamic (time-evolving) features. The results show a clear increase in classification accuracy with temporal dynamic features. In particular, the support vector machine classifier with temporal features showed the best accuracy (63.8%) in classifying high- vs low-arousal auditory stimuli.
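To make the contrast between static and dynamic (time-evolving) features concrete, the following is a minimal sketch, not the authors' actual pipeline: the sampling rate, montage size, alpha band and window lengths are all illustrative assumptions. The static variant summarizes a whole pre-stimulus epoch in one spectral vector; the dynamic variant recomputes the same features over sliding windows so their temporal evolution is preserved.

```python
import numpy as np
from scipy.signal import welch

FS = 256          # assumed sampling rate (Hz); not stated in this excerpt
N_CHANNELS = 32   # assumed EEG montage size

def static_spectral_features(trial):
    """trial: (n_channels, n_samples) pre-stimulus EEG epoch.
    Returns one band-power value per channel (static spectral feature)."""
    nperseg = min(FS, trial.shape[-1])
    freqs, psd = welch(trial, fs=FS, nperseg=nperseg)  # per-channel Welch PSD
    band = (freqs >= 8) & (freqs <= 13)                # alpha band, illustrative
    return psd[:, band].mean(axis=1)                   # shape: (n_channels,)

def dynamic_temporal_features(trial, win_s=0.5, step_s=0.25):
    """Same spectral features recomputed in sliding windows, so the
    resulting vector encodes how activity evolves over the epoch."""
    w, s = int(win_s * FS), int(step_s * FS)
    feats = [static_spectral_features(trial[:, i:i + w])
             for i in range(0, trial.shape[1] - w + 1, s)]
    return np.concatenate(feats)  # shape: (n_windows * n_channels,)

# Example on a synthetic 1 s pre-stimulus epoch
trial = np.random.default_rng(0).standard_normal((N_CHANNELS, FS))
print(static_spectral_features(trial).shape)   # (32,)
print(dynamic_temporal_features(trial).shape)  # (96,) = 3 windows x 32 channels
```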
Highlights
In recent decades, the view of the brain has shifted from that of a passive stimulus processor to that of an active builder of reality
Building on the aforementioned studies, we focused on whether brain anticipatory activity extends to statistically unpredictable emotional stimuli
Note that all the accuracies refer to the same static classification problem, performed using different classifiers (support vector machine (SVM), linear discriminant analysis (LDA) and k-nearest neighbour (kNN)) and features, on different groups (passive image (Ps_Im), passive sound (Ps_So), active image (Ac_Im), active sound (Ac_So)); a comparison of this kind is sketched in the example below
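The following is a hedged sketch of such a comparison: the same feature matrix evaluated with SVM, LDA and kNN under cross-validation. `X` and `y` are synthetic stand-ins for per-trial EEG features and high/low-arousal labels, and the hyperparameters are illustrative, not the paper's actual settings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 96))   # 120 trials x 96 features (placeholder data)
y = rng.integers(0, 2, size=120)     # labels: high (1) vs low (0) arousal

classifiers = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "LDA": LinearDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale features within each fold
    scores = cross_val_score(pipe, X, y, cv=10)  # 10-fold accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Wrapping the scaler and classifier in one pipeline ensures the feature scaling is fitted only on each training fold, avoiding information leakage into the test fold.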
Summary
The view of the brain has shifted from that of a passive stimulus processor to that of an active builder of reality. The brain extracts information from the environment, building up inner models of external reality. These models are used to optimize behavioural outcomes when reacting to upcoming stimuli [1,2,3,4]. One of the main theoretical models assumes that the brain, in order to regulate bodily reactions, runs an internal model of the body in the world, as described by the embodied simulation framework [5]. Emotions can thus be considered both as reactions to the external world and as partially shaped by our internal representation of the environment, which helps us anticipate possible scenarios and regulate our behaviour.