Abstract

Multi-modal emotion recognition is the task of identifying human emotions from multiple information sources, such as facial expressions, voice intonation, and EEG signals. Emotion recognition is poised to play a pivotal role in domains such as healthcare, education, and customer service, and as the technology matures, the privacy concerns it raises must be addressed responsibly. Key challenges in multi-modal emotion recognition include temporally aligning data from different modalities and handling noisy or incomplete information. In this paper, we address these challenges by employing a support vector machine (SVM) as our classifier, using the IEMOCAP dataset for speech and video and the DEAP dataset for EEG signals. The SVM achieves 76.22% accuracy on IEMOCAP and 68.89% accuracy on DEAP.
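
The abstract does not specify feature extraction or SVM hyperparameters, so the following is only a minimal sketch of the kind of SVM pipeline described, using scikit-learn with randomly generated stand-in features in place of IEMOCAP/DEAP data; the feature dimension, class count, and RBF kernel settings are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Stand-in data: in the paper, X would hold fused multi-modal features
# (speech/video from IEMOCAP or EEG from DEAP) and y the emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))        # (n_samples, n_features) -- assumed shape
y = rng.integers(0, 4, size=500)       # e.g. 4 emotion classes -- assumed

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Standardize features, then fit an RBF-kernel SVM (a common default choice).
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

# Report held-out accuracy, the metric quoted in the abstract.
print("accuracy:", accuracy_score(y_test, clf.predict(scaler.transform(X_test))))
```

In practice the two datasets would each be run through such a pipeline separately, with dataset-specific feature extraction producing the vectors in X.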
