Abstract

Within a single speech emotion corpus, deep neural networks have achieved decent performance in speech emotion recognition. However, the performance of data-driven emotion recognition methods degrades significantly in cross-corpus scenarios. To alleviate this issue without requiring any labeled samples from the target domain, we propose a cross-corpus speech emotion recognition method based on few-shot learning and unsupervised domain adaptation, which is trained to learn class (emotion) similarity from source-domain samples adapted to the target domain. In addition, we use multiple corpora during training to enhance the robustness of emotion recognition to unseen samples. Experiments on emotional speech corpora in three different languages showed that the proposed method outperformed other approaches.
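To make the high-level idea concrete, below is a minimal sketch of how prototype-based few-shot emotion classification on labeled source-corpus data can be combined with an unsupervised alignment term computed on unlabeled target-corpus data. This is not the authors' exact model: the abstract does not specify the encoder, the class-similarity metric, or the adaptation objective, so the toy `Encoder`, the Euclidean prototype distance, the Gaussian-kernel MMD term, and the 0.1 loss weight are all illustrative assumptions.

```python
# Sketch: episodic few-shot training (class-prototype similarity) on source-corpus
# samples, plus an unsupervised MMD term aligning source and target embeddings.
# Encoder architecture, distance metric, MMD kernel, and loss weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy acoustic-feature encoder; a real system would use a CNN/RNN front end."""
    def __init__(self, in_dim=40, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))

    def forward(self, x):
        return self.net(x)

def prototypes(support_emb, support_labels, n_classes):
    """Class prototype = mean embedding of each emotion's support samples."""
    return torch.stack([support_emb[support_labels == c].mean(0) for c in range(n_classes)])

def few_shot_loss(query_emb, query_labels, protos):
    """Classify queries by (negative squared) distance to each emotion prototype."""
    logits = -torch.cdist(query_emb, protos) ** 2
    return F.cross_entropy(logits, query_labels)

def mmd_loss(src_emb, tgt_emb, sigma=1.0):
    """Gaussian-kernel MMD between source and unlabeled target embeddings."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2)).mean()
    return k(src_emb, src_emb) + k(tgt_emb, tgt_emb) - 2 * k(src_emb, tgt_emb)

# One illustrative training episode; random tensors stand in for speech features.
n_classes, n_support, n_query, in_dim = 4, 5, 5, 40
enc = Encoder(in_dim=in_dim)
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)

support_x = torch.randn(n_classes * n_support, in_dim)
support_y = torch.arange(n_classes).repeat_interleave(n_support)
query_x = torch.randn(n_classes * n_query, in_dim)
query_y = torch.arange(n_classes).repeat_interleave(n_query)
target_x = torch.randn(32, in_dim)  # unlabeled target-corpus batch

s_emb, q_emb, t_emb = enc(support_x), enc(query_x), enc(target_x)
loss = few_shot_loss(q_emb, query_y, prototypes(s_emb, support_y, n_classes)) \
       + 0.1 * mmd_loss(torch.cat([s_emb, q_emb]), t_emb)  # 0.1 weight is arbitrary here

opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch, episodes drawn from multiple source corpora would simply mix corpora when sampling support and query sets, which mirrors the multi-corpus training the abstract describes.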
