Abstract

Facial expression recognition plays an important role in research on human–computer interaction. Common facial expressions are mixtures of six basic emotions: anger, disgust, fear, happiness, sadness, and surprise. Most current studies, however, have focused on recognizing a single basic emotion from physiological signals. We proposed emotion distribution learning (EDL) based on surface electromyography (sEMG) for predicting the intensities of basic emotions. We recorded sEMG signals from the depressor supercilii, zygomaticus major, frontalis medial, and depressor anguli oris muscles. Six features were extracted in the frequency, time, time–frequency, and entropy domains, and principal component analysis (PCA) was used to select the most representative features for prediction. The key idea of EDL is to learn a function that maps the PCA-selected features to a facial expression distribution, so that the degree to which each basic emotion describes a given expression can be learned. Jeffrey's divergence was adopted in the learning objective to account for the relationship between different basic emotions. The performance of EDL was compared with that of multilabel learning methods based on the same PCA-selected features. Predictions were evaluated with six indices that reflect the distance or similarity between distributions. We conducted experiments on six different emotion distributions. The experimental results show that EDL predicts facial expression distributions more accurately than the other methods.
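To make the pipeline concrete, the sketch below illustrates the core idea under stated assumptions: it is not the authors' implementation, but a minimal toy model in which PCA-selected feature vectors are mapped through a softmax parameterization to a distribution over the six basic emotions, with the weights fitted by gradient descent on Jeffrey's divergence between predicted and annotated distributions. The softmax form, the class and function names, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def jeffreys_divergence(p, q, eps=1e-12):
    """Jeffrey's divergence J(p, q) = KL(p||q) + KL(q||p); symmetric in p and q."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum((p - q) * (np.log(p) - np.log(q))))

class SoftmaxEDL:
    """Hypothetical toy distribution learner (not the paper's EDL model):
    maps feature vectors x to softmax(W x) and fits W by gradient descent
    on the mean Jeffrey's divergence to the target emotion distributions."""

    def __init__(self, n_features, n_emotions=len(EMOTIONS), lr=0.1, n_iter=3000):
        self.W = np.zeros((n_emotions, n_features))
        self.lr, self.n_iter = lr, n_iter

    def predict_proba(self, X):
        z = X @ self.W.T
        z -= z.max(axis=1, keepdims=True)   # stabilize the softmax numerically
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def fit(self, X, D, eps=1e-12):
        for _ in range(self.n_iter):
            P = self.predict_proba(X)
            r = np.log(np.clip(P, eps, 1.0)) - np.log(np.clip(D, eps, 1.0))
            # Gradient of Jeffrey's divergence w.r.t. the softmax logits:
            # (P - D) comes from KL(D||P); P * (r - E_P[r]) from KL(P||D).
            g = (P - D) + P * (r - (P * r).sum(axis=1, keepdims=True))
            self.W -= self.lr * (g.T @ X) / len(X)
        return self

# Usage on synthetic data (stand-ins for PCA-selected sEMG features):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
W_true = rng.normal(size=(len(EMOTIONS), 8))
D = np.exp(X @ W_true.T)
D /= D.sum(axis=1, keepdims=True)        # synthetic target emotion distributions

model = SoftmaxEDL(n_features=8).fit(X, D)
P = model.predict_proba(X)
print("mean Jeffrey's divergence:",
      np.mean([jeffreys_divergence(d, p) for d, p in zip(D, P)]))
```

Because Jeffrey's divergence is the symmetrized Kullback–Leibler divergence, it penalizes mass misplaced in either direction between the predicted and annotated distributions, which is one way to encode the relationship between basic emotions that the abstract mentions.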
