Abstract

Emotion detection is central to human–machine interfaces (HMI), especially for hospitalized patients. The emergence of the fourth industrial revolution (4IR) has heightened interest in emotional intelligence in human–computer interaction (HCI). This work employs electroencephalography (EEG), an optical flow algorithm, and machine learning to build a multimodal, intelligent, real-time emotion recognition system. The objective is to help hospitalized patients, disabled (deaf, mute, and bedridden) individuals, and autistic children express their authentic feelings. We fed a multimodal feature fusion vector to a long short-term memory (LSTM) classifier to distinguish six fundamental emotions: anger, disgust, fear, sadness, joy, and surprise. The fusion vector combines the patient's geometric facial features with EEG signals. From 14 EEG channels, we computed relative power in four bands: alpha (8–13 Hz), beta (13–30 Hz), gamma (30–49 Hz), and theta (4–8 Hz). We achieved a maximum recognition rate of 90.25 percent using facial landmarks alone and 87.25 percent using EEG data alone. When both the facial and EEG streams were combined in the multimodal approach, accuracy reached 99.3 percent.
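
To illustrate the feature-level fusion and LSTM classification described above, the following is a minimal PyTorch sketch. The feature dimensions (14 channels × 4 band powers for EEG, and a hypothetical 68 facial landmarks × 2 coordinates), the hidden size, and the fusion-by-concatenation scheme are assumptions for illustration only, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

N_EEG_FEATURES = 14 * 4    # relative power in alpha, beta, gamma, theta per channel
N_FACE_FEATURES = 68 * 2   # hypothetical landmark (x, y) coordinate count
N_CLASSES = 6              # anger, disgust, fear, sadness, joy, surprise


class MultimodalEmotionLSTM(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=N_EEG_FEATURES + N_FACE_FEATURES,
            hidden_size=hidden_size,
            batch_first=True,
        )
        self.classifier = nn.Linear(hidden_size, N_CLASSES)

    def forward(self, eeg_seq, face_seq):
        # eeg_seq:  (batch, time, 56)  band-power features per analysis window
        # face_seq: (batch, time, 136) geometric facial features per frame
        fused = torch.cat([eeg_seq, face_seq], dim=-1)  # feature-level fusion
        _, (h_n, _) = self.lstm(fused)
        return self.classifier(h_n[-1])                 # emotion logits


# Usage: a batch of 8 sequences, 32 time steps each
model = MultimodalEmotionLSTM()
logits = model(torch.randn(8, 32, N_EEG_FEATURES),
               torch.randn(8, 32, N_FACE_FEATURES))
print(logits.shape)  # torch.Size([8, 6])
```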
