Abstract

Speech emotion recognition is a challenging topic with many important real-life applications, especially in human-computer interaction. Traditional methods follow a pipeline of pre-processing, feature extraction, dimensionality reduction and emotion classification. Previous studies have focused on emotion recognition based on two different models: the discrete model and the continuous model. Both the speaker's age and gender affect speech emotion recognition under either model. Moreover, previous investigations have shown that the dimensional attributes of emotion, such as arousal, valence and dominance, are related to each other. Based on these observations, we propose a new attribute recognition model using Feature Nets, aiming to improve emotion recognition performance and generalisation capability. The method first uses the corpus to train an age and gender classification model, which is then transferred to the main model: a hierarchical deep learning model that uses age and gender as high-level attributes of emotion. Experiments on the public databases EMO-DB and IEMOCAP evaluate the performance on both the classification task and the regression task. The results show that the proposed attribute-transfer approach improves recognition accuracy, regardless of whether age or gender is transferred.
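To make the attribute-transfer idea concrete, the sketch below shows one plausible reading of the described architecture: an auxiliary network is pre-trained on age or gender labels, its encoder is frozen, and the resulting attribute embedding is fed, together with the acoustic features, into the emotion classifier. This is a minimal PyTorch-style illustration under our own assumptions; all module names, layer sizes and the feature dimension are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class AttributeNet(nn.Module):
    """Auxiliary network pre-trained to classify a speaker attribute
    (age group or gender). Dimensions are illustrative assumptions."""
    def __init__(self, feat_dim=128, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        z = self.encoder(x)        # 32-d attribute embedding
        return self.head(z), z

class EmotionNet(nn.Module):
    """Main model: concatenates the acoustic features with the
    transferred attribute embedding before the emotion classifier."""
    def __init__(self, attribute_net, feat_dim=128, n_emotions=4):
        super().__init__()
        self.attribute_net = attribute_net
        for p in self.attribute_net.parameters():
            p.requires_grad = False   # freeze the transferred encoder
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim + 32, 64), nn.ReLU(),
            nn.Linear(64, n_emotions),
        )

    def forward(self, x):
        _, z = self.attribute_net(x)              # high-level attribute cue
        return self.classifier(torch.cat([x, z], dim=-1))

# Usage: pre-train AttributeNet on age/gender labels, then train EmotionNet
# on emotion labels with the attribute encoder frozen.
attr_net = AttributeNet()
model = EmotionNet(attr_net)
logits = model(torch.randn(8, 128))               # batch of 8 utterances
```

For the regression task mentioned in the abstract (continuous arousal, valence and dominance), the final layer would output three real-valued scores instead of class logits; the attribute-transfer mechanism itself is unchanged.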
