Abstract

Multi-party conversation modeling plays a vital role in emotion recognition in conversation (ERC). Beyond the intra- and inter-speaker dependencies among participants, the difficulty also lies in the fact that a conversation may contain anywhere from a few to many utterances, which together form a long text sequence. In this article, we present two approaches to effective multi-party conversation modeling. First, to encode long sequences and capture long-range dependencies between utterances, we introduce a dialog-oriented language model, DialogXL, which uses an enhanced memory to store longer conversation histories and dialog-aware self-attention to handle multi-party dependencies. Second, we present a directed acyclic neural network, DAG-ERC, which encodes the utterances with a directed acyclic graph (DAG) to better capture the intrinsic structure of a conversation. DAG-ERC combines the advantages of recurrent models and graph models and offers a more intuitive way to model information flow between sequential utterances. Extensive experiments are conducted on four ERC benchmarks against state-of-the-art models, and the empirical results demonstrate the superiority of the two models in multi-party conversation modeling.
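To make the DAG-based view of a conversation concrete, below is a minimal sketch (not the authors' released code) of one plausible way to build a directed acyclic graph over utterances: each utterance receives edges from its preceding utterances, looking backwards until a fixed number of earlier utterances by the same speaker have been covered. The function name `build_dag_edges` and the hyper-parameter name `omega` are illustrative assumptions, not names taken from the article.

```python
# A minimal illustrative sketch of constructing a DAG over a multi-party conversation.
# Edges always point forward in time (from earlier to later utterances), so the
# resulting graph is acyclic by construction.

from typing import List, Tuple

def build_dag_edges(speakers: List[str], omega: int = 1) -> List[Tuple[int, int]]:
    """Return directed edges (j, i) with j < i for a conversation.

    speakers : speaker label of each utterance, in temporal order.
    omega    : how many previous same-speaker utterances to look back over
               (a hypothetical hyper-parameter name used here for illustration).
    """
    edges = []
    for i, spk in enumerate(speakers):
        same_speaker_seen = 0
        for j in range(i - 1, -1, -1):        # walk backwards from utterance i-1
            edges.append((j, i))              # information flows from j to i
            if speakers[j] == spk:
                same_speaker_seen += 1
                if same_speaker_seen >= omega:
                    break                     # stop once enough same-speaker context is covered
    return edges

if __name__ == "__main__":
    # Toy three-party conversation with speakers A, B, A, C, B
    print(build_dag_edges(["A", "B", "A", "C", "B"], omega=1))
    # Utterance 4 (B) links back through C and A until the previous B utterance.
```

A graph neural network can then propagate information along these edges, which is one way to realize the combination of recurrent-style sequential flow and graph-style structural modeling described above.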
