Abstract

Emotion state recognition using wireless signals is an emerging area of research that has an impact on neuroscientific studies of human behaviour and well-being monitoring. Currently, standoff emotion detection relies mostly on the analysis of facial expressions and/or eye movements acquired from optical or video cameras. Meanwhile, although machine learning approaches have been widely accepted for recognizing human emotions from multimodal data, they have been mostly restricted to subject-dependent analyses, which lack generality. In this paper, we report an experimental study in which heartbeat and breathing signals of 15 participants are collected from radio frequency (RF) reflections off the body and cleaned with novel noise-filtering techniques. We propose a novel deep neural network (DNN) architecture based on the fusion of raw RF data and the processed RF signal for classifying and visualising various emotion states. The proposed model achieves a classification accuracy of 71.67% for independent subjects, with precision, recall and F1-score values of 0.71, 0.72 and 0.71, respectively. We have compared our results with those obtained from five different classical ML algorithms and established that deep learning offers superior performance even with a limited amount of raw RF and post-processed time-sequence data. The deep learning model has also been validated by comparing our results with those from ECG signals. Our results indicate that using wireless signals for standoff emotion state detection is an accurate alternative to other technologies and may have much wider applications in future studies of the behavioural sciences.

Highlights

  • The radio frequency (RF) reflections off the body are preprocessed and fed to machine learning (ML) algorithms to classify four basic emotion types: anger, sadness, joy and pleasure

  • We employ an appropriate deep neural network (DNN) architecture to process the time-domain wireless signal (RF reflections off the body) and its frequency-domain counterpart obtained by continuous wavelet (CW) transformation

  • We identify two main reasons why deep learning is superior in this learning problem. First, having both the time-domain wireless signal and the CW-transformed image as inputs gives the convolutional neural network (CNN) + long short-term memory (LSTM) model a rich source of learning. Second, the classical ML algorithms are trained on extracted features, which are sensitive to the level of human judgement in feature selection and suffer an obvious loss of information from the original data
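
The CW transformation mentioned above can be sketched as follows. This is a minimal numpy-only Morlet wavelet transform, not the authors' processing pipeline; the 1.2 Hz "heartbeat" and 0.25 Hz "breathing" test tones, the 50 Hz sampling rate and the scale range are all illustrative assumptions standing in for the real RF reflection data.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=5.0):
    """Magnitude scalogram of `signal` via a Morlet continuous wavelet
    transform; rows index scale, columns index time. The resulting 2-D
    array is the kind of image a CNN branch could consume."""
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Truncate the scaled wavelet to about +/- 4 standard deviations,
        # but never longer than the signal (keeps 'same' output at length n)
        m = int(min(4 * s, (n - 1) // 2))
        t = np.arange(-m, m + 1)
        wavelet = np.exp(1j * w0 * t / s - 0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
    return out

# Synthetic stand-in for a processed RF reflection: a 1.2 Hz heartbeat-like
# tone plus a slower 0.25 Hz breathing-like component, sampled at 50 Hz
fs = 50.0
t = np.arange(0, 10, 1 / fs)                 # 500 samples
rf_like = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.25 * t)

scalogram = morlet_cwt(rf_like, scales=np.arange(1, 64))
print(scalogram.shape)                       # (63, 500)
```

Each row of the scalogram tracks the signal's energy at one scale over time, so slow breathing and faster heartbeat components land in different rows of the same image.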

Introduction

A deep neural network consisting of long short-term memory (LSTM) and convolutional layers has been proposed to detect emotion states from physiological, environmental and location sensor data, with excellent performance in the subject-dependent regime [29]. WiFi-based emotion sensing platforms such as EmoSense have been developed to capture physical body gestures and expressions by analysing signal shadowing and multi-path effects with traditional machine learning algorithms. Several novel deep learning architectures have been proposed for time-series data processing, such as gene expression classification and clustering [32]. These approaches range from simple multi-layer feed-forward neural networks [33, 34] to more complex frameworks, such as the LSTM-based deepMirGene [35], the recurrent neural network (RNN) and autoencoder based DeepTarget [36], and fDNN. We propose that RF reflections can be an exceptional alternative to ECG or bulky wearables for subject-independent human emotion detection with high and comparable accuracy.
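
The fusion idea behind such architectures, feeding both the raw time series and its CWT image into one classifier, can be sketched with a toy numpy forward pass. This is not the authors' CNN+LSTM model: the convolution-plus-pooling branch stands in for the CNN+LSTM layers, and every shape, layer size and random weight here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """Valid 1-D correlation of x (T,) with kernels (K, k), then ReLU."""
    K, k = kernels.shape
    out = np.empty((K, len(x) - k + 1))
    for i in range(K):
        for j in range(out.shape[1]):
            out[i, j] = x[j:j + k] @ kernels[i]
    return np.maximum(out, 0.0)

def forward(ts, scalogram, params):
    # Branch 1: conv features from the raw time series, globally pooled
    f1 = conv1d_relu(ts, params["k1"]).mean(axis=1)            # (8,)
    # Branch 2: flattened CWT image through a dense layer (CNN stand-in)
    f2 = np.maximum(scalogram.ravel() @ params["w2"], 0.0)     # (8,)
    # Fusion: concatenate both branches, classify into 4 emotion classes
    logits = np.concatenate([f1, f2]) @ params["w3"]           # (4,)
    e = np.exp(logits - logits.max())
    return e / e.sum()                                         # softmax

ts = rng.standard_normal(200)            # raw RF-like time series
scal = rng.standard_normal((16, 200))    # its CWT 'image'
params = {
    "k1": rng.standard_normal((8, 5)) * 0.1,
    "w2": rng.standard_normal((16 * 200, 8)) * 0.01,
    "w3": rng.standard_normal((16, 4)) * 0.1,
}
probs = forward(ts, scal, params)
print(probs.shape)                       # (4,) class probabilities
```

The design point is that neither branch is asked to recover what the other encodes: temporal dynamics flow through the sequence branch while time-frequency structure arrives pre-separated in the image branch, and the classifier learns from their concatenation.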
