Abstract

The paper presents a multi-head attention deep learning network for Speech Emotion Recognition (SER) that takes Log Mel-Filter Bank Energy (LFBE) spectral features as input. Multi-head attention, combined with a position embedding, jointly attends to information from different representations of the same LFBE input sequence. The position embedding helps the model attend to the dominant emotion features by encoding the positions of those features in the sequence. In addition to multi-head attention and the position embedding, we apply multi-task learning with gender recognition as an auxiliary task. The auxiliary task helps the network learn gender-specific features that influence the emotion characteristics of speech, improving the accuracy of the primary task, Speech Emotion Recognition. We conducted all experiments on the IEMOCAP dataset, achieving an overall accuracy of 76.4% and an average class accuracy of 70.1%, which are 5.3% and 6.2% higher, respectively, than state-of-the-art SER models for four emotion classes.
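The pipeline described above can be illustrated with a minimal NumPy sketch: LFBE-like frame features plus a sinusoidal position embedding are passed through multi-head self-attention, pooled, and fed to two output heads for the primary (emotion) and auxiliary (gender) tasks. All dimensions, the sinusoidal embedding variant, and the random projections standing in for learned weights are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal position embedding (an assumed variant; the abstract does
    # not specify the exact embedding scheme used in the paper).
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, n_heads, rng):
    # x: (seq_len, d_model) frame features already projected to the model
    # dimension. Random matrices stand in for learned Q/K/V weights.
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
                      for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(d_head))  # attention over positions
        heads.append(attn @ v)
    return np.concatenate(heads, axis=-1)  # (seq_len, d_model)

rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 50, 64, 8          # assumed sizes
lfbe = rng.standard_normal((seq_len, d_model))  # stand-in for LFBE frames
h = multi_head_self_attention(lfbe + positional_encoding(seq_len, d_model),
                              n_heads, rng)
pooled = h.mean(axis=0)
# Multi-task heads: emotion is the primary task (4 classes), gender the
# auxiliary task (2 classes); both share the attention representation.
emotion_probs = softmax(pooled @ rng.standard_normal((d_model, 4)))
gender_probs = softmax(pooled @ rng.standard_normal((d_model, 2)))
```

In training, a weighted sum of the emotion and gender losses would be backpropagated through the shared attention layers, which is how the auxiliary task shapes the features used by the primary task.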
