Abstract

Real-time emotion recognition has been an active field of research over the past several decades. This work aims to classify the emotional expressions of physically disabled people (deaf, dumb, and bedridden) and children with autism based on facial landmarks and electroencephalography (EEG) signals, using convolutional neural network (CNN) and long short-term memory (LSTM) classifiers. To this end, an algorithm for real-time emotion recognition is developed that uses virtual markers tracked by an optical flow algorithm and works effectively under uneven lighting, subject head rotation (up to 25°), different backgrounds, and various skin tones. Six facial emotions (happiness, sadness, anger, fear, disgust, and surprise) are collected using ten virtual markers. Fifty-five undergraduate students (35 male and 25 female) with a mean age of 22.9 years voluntarily participated in the facial emotion recognition experiment, and nineteen undergraduate students volunteered for EEG signal collection. Initially, Haar-like features are used for face and eye detection. Virtual markers are then placed at defined locations on the subject's face, based on the Facial Action Coding System and a mathematical model approach, and the markers are tracked using the Lucas-Kanade optical flow algorithm. The distance between the center of the subject's face and each marker position is used as a feature for facial expression classification. This distance feature is statistically validated using a one-way analysis of variance with a significance level of p
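For readers unfamiliar with the pipeline described above, the sketch below illustrates the general approach using OpenCV: Haar-cascade face detection, virtual markers tracked with the Lucas-Kanade optical flow algorithm, and Euclidean distances from the face center as features. It is an illustrative approximation, not the authors' implementation; the marker grid, parameters, and video source are hypothetical placeholders.

```python
# Minimal illustrative sketch (not the authors' released code) of the abstract's pipeline:
# Haar-cascade face detection, Lucas-Kanade tracking of virtual markers, and
# distance-to-face-center features. The 5x2 marker grid below is a simplification;
# the paper places ten markers on locations defined by the Facial Action Coding System.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # webcam; replace with a video file path if preferred
prev_gray, markers, face_centre = None, None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if markers is None:
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces):
            x, y, w, h = faces[0]
            face_centre = np.array([x + w / 2.0, y + h / 2.0], dtype=np.float32)
            # Hypothetical 5x2 grid of ten virtual markers inside the face box.
            xs = np.linspace(x + 0.2 * w, x + 0.8 * w, 5)
            ys = (y + 0.35 * h, y + 0.70 * h)
            markers = np.array([[[px, py]] for py in ys for px in xs], dtype=np.float32)
    else:
        # Track the markers frame-to-frame with pyramidal Lucas-Kanade optical flow.
        markers, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, markers, None)
        # Feature vector: distance from the face center to each marker; in the paper,
        # such features are fed to the CNN/LSTM classifiers.
        features = np.linalg.norm(markers.reshape(-1, 2) - face_centre, axis=1)

    prev_gray = gray
    cv2.imshow("virtual markers", frame)
    if cv2.waitKey(1) == 27:       # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()
```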
