Abstract

Emotion recognition in conversation has become a popular topic in natural language processing (NLP). Speaker information plays an important role in dialogue systems, and speaker state in particular is closely related to emotion. As the number of speakers grows, modeling speaker state in multi-speaker conversation becomes more challenging than in two-speaker conversation. In this paper, we focus on emotion detection in multi-speaker conversation, a more general conversational emotion task. We address two main problems. First, the more speakers there are, the harder it is to model their interactions and infer each speaker's state. Second, because conversations vary over time, it is necessary to model each speaker's dynamic state throughout the conversation. For the first problem, we adopt a graph structure, which has the expressive power to model speaker interactions and speaker state. For the second problem, we use a dynamic graph neural network to model speakers' dynamic states. We therefore propose the Dual View Dialogue Graph Neural Network (DVDGCN), a graph neural network that models both a context-static and a speaker-dynamic graph. Experimental results on a multi-speaker conversational emotion recognition corpus demonstrate the effectiveness of the proposed approach.
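The abstract does not specify the model's internals, but the dual-view idea it describes can be illustrated with a toy sketch: build one adjacency matrix over temporally adjacent utterances (the context view) and another over same-speaker utterances (the speaker view), propagate utterance features through a standard GCN layer on each, and fuse the results. All names, dimensions, and the fusion-by-sum choice here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    # One GCN propagation step: add self-loops, symmetrically
    # normalize the adjacency, propagate features, apply ReLU.
    a = adj + np.eye(adj.shape[0])
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return np.maximum(d @ a @ d @ feats @ weight, 0.0)

# Toy conversation: 4 utterances by speakers [A, B, A, C]
speakers = [0, 1, 0, 2]
n, dim = len(speakers), 8
rng = np.random.default_rng(0)
feats = rng.standard_normal((n, dim))  # stand-in utterance encodings

# Context view: edges between temporally adjacent utterances
ctx = np.zeros((n, n))
for i in range(n - 1):
    ctx[i, i + 1] = ctx[i + 1, i] = 1.0

# Speaker view: edges between utterances from the same speaker
spk = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j and speakers[i] == speakers[j]:
            spk[i, j] = 1.0

w_ctx = rng.standard_normal((dim, dim))
w_spk = rng.standard_normal((dim, dim))

# Fuse the two views by summing their propagated representations
# (one of several plausible fusion choices; hypothetical here)
fused = gcn_layer(ctx, feats, w_ctx) + gcn_layer(spk, feats, w_spk)
print(fused.shape)  # (4, 8)
```

In a full model, `fused` would feed a classifier over emotion labels, and the speaker-view graph would be rebuilt or reweighted per time step to capture the dynamic speaker state the abstract describes.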
