Abstract

To further extend the applicability of wearable sensors in various domains, such as mobile health systems and the automotive industry, new methods are required for accurately extracting subtle physiological information from these sensors. However, extracting valuable information from physiological signals is still challenging: smartphones can count steps and compute heart rate, but they cannot recognize emotions and related affective states. This study analyzes the feasibility of using end-to-end multimodal deep learning (DL) methods for affect recognition. Ten end-to-end DL architectures are compared on four datasets with diverse raw physiological signals used for affect recognition, including emotional and stress states. The DL architectures specialized for time-series classification were enhanced to learn from multiple sensors simultaneously, each with its own sampling frequency. To enable a fair comparison among the different DL architectures, Bayesian optimization was used for hyperparameter tuning. The experimental results showed that model performance depends on the intensity of the physiological response induced by the affective stimuli: the DL models recognize stress induced by the Trier Social Stress Test more successfully than they recognize emotional changes induced by watching affective content, e.g., funny videos. Additionally, the results showed that CNN-based architectures might be more suitable than LSTM-based architectures for affect recognition from physiological sensors.

Highlights

  • Emotions are complex states that result in psychological and physiological changes that influence our behavior and thinking [1]

  • Fully convolutional network (FCN) [44], Residual network (ResNet) [45], Multilayer perceptron (MLP) [44], Encoder [46], Time convolutional neural network (Time-CNN) [47], Multichannel deep convolutional neural network (MCDCNN) [47], Spectrotemporal residual network (Stresnet) [48], Convolutional neural network with long short-term memory (CNN-LSTM) [5], and Multilayer perceptron with long short-term memory (MLP-LSTM) were compared; architectures 1–6 were taken from a review of deep learning (DL) architectures for time-series classification [20]

  • The existing DL architectures specialized for time-series classification were enhanced to enable learning from several sensors simultaneously in an end-to-end manner
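The multi-sensor enhancement described above can be illustrated with a minimal sketch (an assumption of one plausible design, not the paper's actual code): each sensor stream gets its own 1D-convolutional branch, and global average pooling makes each branch's output independent of the stream's length, so streams recorded at different sampling frequencies can be fused and classified end-to-end.

```python
# Hypothetical multi-branch end-to-end model for multi-rate sensor fusion.
import torch
import torch.nn as nn

class MultiSensorCNN(nn.Module):
    def __init__(self, n_sensors, n_classes, n_filters=16):
        super().__init__()
        # One convolutional branch per sensor modality.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, n_filters, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # collapses the time axis regardless of length
            )
            for _ in range(n_sensors)
        ])
        self.classifier = nn.Linear(n_sensors * n_filters, n_classes)

    def forward(self, signals):
        # signals: list of tensors, one per sensor, each of shape (batch, 1, T_i);
        # T_i may differ because each sensor has its own sampling frequency.
        feats = [branch(x).squeeze(-1) for branch, x in zip(self.branches, signals)]
        return self.classifier(torch.cat(feats, dim=1))

# Example: a 4 Hz EDA stream and a 64 Hz BVP stream over the same 10 s window.
model = MultiSensorCNN(n_sensors=2, n_classes=2)
logits = model([torch.randn(8, 1, 40), torch.randn(8, 1, 640)])
print(logits.shape)  # torch.Size([8, 2])
```

The pooling step is the key design choice here: it removes the dependence on window length in samples, so no branch needs resampling to a shared frequency.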



Introduction

Emotions are complex states that result in psychological and physiological changes that influence our behavior and thinking [1]. The emotional state of fear, for example, usually initiates rapid heartbeat, rapid breathing, sweating, and muscle tension. These physiological changes can be captured by sensors embedded in wearable devices, which can measure [3]: electrocardiography (ECG), cardiac electrical activity; electroencephalography (EEG), brain electrical activity; electromyography (EMG), muscle activity; blood volume pulse (BVP), cardiovascular dynamics; electrodermal activity (EDA), sweating level; electrooculography (EOG), eye movements; respiration rate (RESP); facial muscle activation (EMO), emotional activation; and body temperature (TEMP). A barometer, an altimeter, ambient light and temperature sensors, and GPS may be useful as additional data sources.

Sensors 2020, 20, 6535; doi:10.3390/s20226535
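Before such signals reach a classifier, they are typically segmented into fixed-duration windows. A minimal sketch (the window and step lengths are illustrative assumptions, not the paper's settings) shows why sampling frequency matters: the same 10 s window holds a different number of samples for each sensor.

```python
# Hypothetical segmentation of raw sensor streams into overlapping windows.
def segment(signal, fs_hz, window_s=10, step_s=5):
    """Split a 1-D signal into windows of window_s seconds, hopping step_s seconds."""
    win = int(fs_hz * window_s)
    step = int(fs_hz * step_s)
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

eda = [0.0] * 240    # 60 s of EDA sampled at 4 Hz
bvp = [0.0] * 3840   # 60 s of BVP sampled at 64 Hz

eda_windows = segment(eda, fs_hz=4)    # 11 windows of 40 samples each
bvp_windows = segment(bvp, fs_hz=64)   # 11 windows of 640 samples each
```

Both streams yield the same number of windows over the same recording, but the windows differ in sample count, which is why a multi-rate model must not assume a single input length.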

