Abstract

The recognition of emotions in people with sensory disabilities still represents a challenge due to the difficulty of generalizing and modeling the set of brain signals. In recent years, the brain–computer interface (BCI) has been the technology used to study a person's behavior and emotions based on brain signals. Although previous works have proposed the classification of emotions in people with sensory disabilities using machine learning techniques, a model for the recognition of emotions in people with visual disabilities has not yet been evaluated. Consequently, in this work, the authors present a twofold framework focused on people with visual disabilities. First, auditory stimuli were used, and a component for the acquisition and extraction of brain signals was defined. Second, analysis techniques for the modeling of emotions were developed, and machine learning models for the classification of emotions were defined. Based on the results, the algorithm with the best performance in validation is random forest (RF), with accuracies of 85% and 88% in the classification of negative and positive emotions, respectively. The framework is thus able to classify positive and negative emotions, but the experiments also show that its performance depends on the number of features in the dataset and that the quality of the electroencephalogram (EEG) signals is a determining factor.
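To make the classification stage concrete, the following is a minimal sketch of training and cross-validating an RF classifier on EEG-derived features. It assumes scikit-learn and a synthetic feature matrix; the feature count (28), the dataset layout, and the model settings are illustrative placeholders, not the paper's actual pipeline.

```python
# Minimal sketch of the RF classification stage described in the abstract.
# Assumptions (not from the paper): synthetic data stands in for band-power
# features extracted from EEG trials; scikit-learn defaults are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per EEG trial, columns holding
# spectral features (e.g., alpha/beta band power per electrode).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 28))      # 200 trials, 28 features (placeholder)
y = rng.integers(0, 2, size=200)    # 0 = negative, 1 = positive emotion

rf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="accuracy")
print(f"Mean CV accuracy: {scores.mean():.2f}")
```

With real EEG features in place of the synthetic matrix, the same cross-validation loop would yield per-class accuracies comparable to the 85%/88% figures reported for negative and positive emotions.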

Highlights

  • The recognition of human emotions was proposed long ago as a foundation for the development of modern computing, with the aim of designing machines that recognize emotions to improve the interaction between humans and computer systems (Picard, 2003)

  • The best results, obtained from the random forest (RF) model in approaches A2-28 and B2-28 for negative and positive emotions, together with an analysis of feature importance, allowed us to recognize that beta frequencies related to the frontotemporal areas of the brain are important in the models' decision making

  • The results show that logistic regression (LR), multilayer perceptron (MLP), K-nearest neighbors (KNN), linear discriminant analysis (LDA), naive Bayes (NB), decision trees (DTs), and neural networks (NNs) achieve lower performance than RF (a hedged comparison sketch follows this list)
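The sketch below illustrates the kind of model comparison and feature-importance analysis summarized in the highlights. It assumes scikit-learn implementations of the named algorithms and reuses a synthetic feature matrix; the hyperparameters and the data are illustrative, not the paper's experimental setup.

```python
# Hedged sketch: compare the classifiers named in the highlights and inspect
# RF feature importances. Data and settings are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 28))      # placeholder EEG feature matrix
y = rng.integers(0, 2, size=200)    # 0 = negative, 1 = positive emotion

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")

# Feature importances from a fitted RF indicate which features (e.g., beta-band
# power at frontotemporal electrodes) most influence the model's decisions.
rf = models["RF"].fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("Top feature indices:", top)
```

On real data, ranking `feature_importances_` in this way is one standard route to the finding that frontotemporal beta-band features dominate the RF's decisions.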

Introduction

The recognition of human emotions was proposed long ago as a foundation for the development of modern computing, with the aim of designing machines that recognize emotions to improve the interaction between humans and computer systems (Picard, 2003). It represents a challenge, since it could mean that computers respond in real time and in a personalized way to the affective or emotional states of a person (Kumar et al., 2016). Various approaches have been tested for the classification of emotions in people under different circumstances, such as music (Vamvakousis and Ramirez, 2015), autism (El Kaliouby et al., 2006), the recognition of emotions using electrodermal activity sensors (Al Machot et al., 2019), and e-Healthcare applications (Ali et al., 2016).
