Abstract
Emotion recognition in conversation (ERC) aims to detect the emotion of each utterance in a conversation, and it has drawn increasing interest due to its wide range of applications. Current methodologies mainly endeavor to capture a good representation of the conversation context. However, we argue that the conversation context is not always consistent with the emotion evolution, and this incongruity can greatly restrict recognition performance. To address these challenges, in this paper we propose an emotion evolution network for emotion recognition in conversation (E2Net). Specifically, a speaker-aware modeling methodology is first constructed to fuse the utterances from conversations: a gated recurrent unit (GRU) encodes the utterances sequentially, and a listener state is introduced to model the interaction between speakers and aid in analyzing the conversation context. A Transformer-based method is then proposed to capture the emotion evolution together with an emotion transformation matrix. To demonstrate the performance of the proposed method, extensive experiments are conducted on four ERC datasets; the experimental results suggest that our method is effective and outperforms current state-of-the-art methods on multiple datasets.
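The abstract only names the components; as a rough illustration of how they could fit together, the sketch below wires up a speaker GRU, a listener GRU, a Transformer encoder over the resulting states, and a learnable emotion transition matrix in PyTorch. The class name E2NetSketch, all hyperparameters, and the way the transition matrix re-weights the class scores are assumptions for illustration only, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class E2NetSketch(nn.Module):
    """Illustrative sketch of the components named in the abstract:
    speaker-aware GRU encoding with a listener state, followed by a
    Transformer encoder and a learnable emotion transition matrix."""

    def __init__(self, utt_dim=768, hidden_dim=256, n_emotions=7,
                 n_heads=4, n_layers=2):
        super().__init__()
        # Speaker GRU updates the state of the party who speaks the utterance.
        self.speaker_gru = nn.GRUCell(utt_dim, hidden_dim)
        # Listener GRU updates the states of the non-speaking parties.
        self.listener_gru = nn.GRUCell(utt_dim, hidden_dim)
        # Transformer encoder models emotion evolution across the dialogue.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=n_heads, batch_first=True)
        self.evolution = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.classifier = nn.Linear(hidden_dim, n_emotions)
        # Hypothetical emotion transition matrix mixing class scores.
        self.transition = nn.Parameter(torch.eye(n_emotions))

    def forward(self, utterances, speaker_ids):
        # utterances: (T, utt_dim) pre-encoded utterance vectors for one dialogue
        # speaker_ids: list of length T with an integer speaker id per utterance
        n_speakers = max(speaker_ids) + 1
        hidden_dim = self.speaker_gru.hidden_size
        states = [utterances.new_zeros(1, hidden_dim) for _ in range(n_speakers)]
        context = []
        for t in range(utterances.size(0)):
            utt = utterances[t:t + 1]                    # (1, utt_dim)
            s = speaker_ids[t]
            # Update the speaker's own state with the current utterance ...
            states[s] = self.speaker_gru(utt, states[s])
            # ... and every listener's state with the same utterance.
            for p in range(n_speakers):
                if p != s:
                    states[p] = self.listener_gru(utt, states[p])
            context.append(states[s])
        context = torch.cat(context, dim=0).unsqueeze(0)  # (1, T, hidden_dim)
        evolved = self.evolution(context)                 # emotion-evolution features
        logits = self.classifier(evolved)                 # (1, T, n_emotions)
        # Re-weight class scores through the transition matrix (an assumption).
        return logits @ self.transition


# Toy usage: 5 utterances with 768-dim features, two alternating speakers.
model = E2NetSketch()
utts = torch.randn(5, 768)
print(model(utts, [0, 1, 0, 1, 0]).shape)  # torch.Size([1, 5, 7])
```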