Abstract
In human-computer interaction, Speech Emotion Recognition (SER) plays an essential role in understanding the user's intent and improving the interactive experience. Although speech conveying similar emotions varies widely in speaker characteristics, it shares common antecedents and consequences; an essential challenge for SER is therefore how to produce robust and discriminative representations from the causality between speech emotions. In this paper, we propose a Gated Multi-scale Temporal Convolutional Network (GM-TCNet), which constructs a novel emotional causality representation learning component with a multi-scale receptive field. This component, built from dilated causal convolution layers and a gating mechanism, captures the dynamics of emotion across the time domain. In addition, GM-TCNet uses skip connections to fuse high-level features from different Gated Convolution Blocks (GCB), capturing abundant and subtle emotion changes in human speech. GM-TCNet takes a single type of feature, Mel-Frequency Cepstral Coefficients (MFCC), as input and passes it through the Gated Temporal Convolutional Module (GTCM) to generate high-level features. Finally, these features are fed to the emotion classifier to accomplish the SER task. The experimental results show that our model achieves the highest performance in most cases, with +0.90% to +18.50% and +0.55% to +20.15% average relative improvement in weighted average recall and unweighted average recall, respectively, compared to state-of-the-art techniques. The source code is available at https://github.com/Jiaxin-Ye/GM-TCNet.
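To make the pipeline described above concrete, the following is a minimal PyTorch-style sketch of a gated, dilated causal convolution stack with skip-connection fusion over MFCC input. It is an illustrative assumption, not the authors' implementation (see the repository above): the class names `GatedConvBlock` and `GatedTemporalConvModule`, and all hyperparameters such as channel counts, kernel size, number of blocks, and number of emotion classes, are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedConvBlock(nn.Module):
    """Sketch of a Gated Convolution Block (GCB): a dilated causal 1-D
    convolution gated by a parallel sigmoid branch (WaveNet-style gating)."""

    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        # Left-pad only, so the convolution is causal (no future frames leak in).
        self.pad = (kernel_size - 1) * dilation
        self.conv_filter = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.conv_gate = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        y = F.pad(x, (self.pad, 0))
        return torch.tanh(self.conv_filter(y)) * torch.sigmoid(self.conv_gate(y))


class GatedTemporalConvModule(nn.Module):
    """Sketch of a GTCM: stacked GCBs with exponentially growing dilations
    (a multi-scale receptive field); skip connections fuse the high-level
    features of every block before classification."""

    def __init__(self, channels=39, n_blocks=4, kernel_size=2, n_classes=7):
        super().__init__()
        self.blocks = nn.ModuleList(
            GatedConvBlock(channels, kernel_size, dilation=2 ** i)
            for i in range(n_blocks)
        )
        self.classifier = nn.Linear(channels, n_classes)

    def forward(self, mfcc):  # mfcc: (batch, n_mfcc, time)
        x, skips = mfcc, []
        for block in self.blocks:
            x = block(x)
            skips.append(x)
        fused = torch.stack(skips).sum(dim=0)   # skip-connection fusion
        pooled = fused.mean(dim=-1)             # global average over time
        return self.classifier(pooled)          # emotion logits


# Example usage: classify a batch of eight 39-dimensional MFCC sequences
# of 300 frames each (dimensions chosen purely for illustration).
logits = GatedTemporalConvModule()(torch.randn(8, 39, 300))
```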