Abstract

Bringing emotion recognition (ER) out of the controlled laboratory setup into everyday life can enable applications targeted at a broader population, e.g., helping people with psychological disorders, assisting children with autism, monitoring the elderly, and improving general well-being. This work reviews progress in sensors and machine learning methods and techniques that have made it possible to move ER from the lab to the field in recent years. In particular, the commercially available sensors collecting physiological data, signal processing techniques, and deep learning architectures used to predict emotions are discussed. A survey on existing systems for recognizing emotions in real-life scenarios—their possibilities, limitations, and identified problems—is also provided. The review is concluded with a debate on what challenges need to be overcome in the domain in the near future.

Highlights

  • Emotions are a basic component of our life, just like breathing or eating

  • Lee et al. showed that the application of signal processing and transformation methods, i.e., independent component analysis (ICA), fast Fourier transform (FFT), and truncated singular value decomposition, can reduce motion artifacts from the BVP signal recorded with a wearable device, resulting in more precise HR readings, even during intense exercise [37,38] (see the simplified sketch after this list)

  • Studies on emotion recognition can differ in many ways, e.g., they may vary in: (1) the dataset used; (2) the emotional model applied; (3) the machine learning approach adopted; (4) the number of classification classes and their distribution; (5) the validation strategy; (6) whether the results are reported on the training, validation, or test set; and (7) the performance quality measure
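The highlighted result concerns cleaning up wearable blood volume pulse (BVP) recordings before estimating heart rate. The sketch below is not Lee et al.'s pipeline [37,38]; it only illustrates one of the named ingredients, an FFT-based band-pass step, on a synthetic signal. The sampling rate, frequency band, and signal model are assumptions chosen for illustration.

```python
# Minimal sketch: suppress out-of-band motion artifacts in a BVP signal via
# frequency-domain filtering, then estimate heart rate from the dominant
# in-band frequency. All parameters below are illustrative assumptions.
import numpy as np

FS = 64.0                   # assumed wearable BVP sampling rate in Hz
LOW_HZ, HIGH_HZ = 0.7, 3.5  # plausible heart-rate band (~42-210 bpm)

def bandpass_fft(signal, fs=FS, low=LOW_HZ, high=HIGH_HZ):
    """Zero FFT bins outside the heart-rate band and reconstruct the signal."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def estimate_hr_bpm(signal, fs=FS):
    """Take the dominant in-band frequency as the heart-rate estimate."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= LOW_HZ) & (freqs <= HIGH_HZ)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

if __name__ == "__main__":
    t = np.arange(0, 30, 1.0 / FS)                   # 30 s window
    pulse = np.sin(2 * np.pi * 1.3 * t)              # ~78 bpm pulse wave
    motion = 0.8 * np.sin(2 * np.pi * 0.25 * t)      # slow motion drift
    noisy = pulse + motion + 0.2 * np.random.randn(len(t))
    print(f"HR estimate: {estimate_hr_bpm(bandpass_fft(noisy)):.1f} bpm")
```

In practice, a component-separation step such as ICA or truncated SVD would typically precede or complement this filtering to separate motion-related components from the pulse signal, as the cited work describes.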


Summary

Introduction

Emotions are a basic component of our life, just like breathing or eating. They are responsible for a majority of our decisions [1]. This work reviews the state of the art in wearable sensors, signal processing, and machine learning models and techniques suitable for emotion recognition from speech, facial images, and physiological signals. The primary focus of this review is on emotion recognition from physiological signals because, unlike facial and speech emotion recognition, it can be performed continuously in everyday life using wearables. The latter modalities are also covered in this work. The abbreviations used in the article are listed in the Acronyms section at the end of the paper.

Emotion Recognition from Physiology
Facial Emotion Recognition
Speech Emotion Recognition
Signal Processing and Transformation
Machine Learning Models and Techniques
Residual Networks
Long Short-Term Memory
Convolutional Neural Networks and Fully Convolutional Networks
End-to-End Deep Learning Approach
Representation Learning
Model Personalization
Existing Systems
Future of Emotion Recognition
Findings
Conclusions

