Abstract

Modeling conversational context is an essential step in Emotion Recognition in Conversations (ERC). To maintain high performance while keeping GPU memory consumption low, this article proposes using multiple hypergraphs to model conversational context and designs a multi-hypergraph feature aggregation network for ERC. We use the context window, speaker information, positional information between utterances, and a specific step size to construct different types of hyperedges. The hypergraphs generated from these hyperedges are then used in turn to aggregate local and remote context information. Experiments on two dialogue emotion datasets, IEMOCAP and MELD, demonstrate the effectiveness and superiority of the proposed model. In addition, the model requires only relatively low GPU memory consumption.
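To make the hyperedge construction concrete, the sketch below illustrates one plausible reading of the abstract: each hyperedge is a set of utterance indices, and separate hypergraphs are built from context-window, same-speaker, and fixed-step-size hyperedges. This is not the authors' implementation; the function names, window size, and step size are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): building the three kinds of
# hyperedges the abstract mentions -- context-window, same-speaker, and
# fixed-step-size. A hypergraph is represented simply as a list of
# hyperedges, each of which is a set of utterance indices.

def window_hyperedges(num_utts, window=2):
    """One hyperedge per utterance, covering its local context window."""
    return [set(range(max(0, i - window), min(num_utts, i + window + 1)))
            for i in range(num_utts)]

def speaker_hyperedges(speakers):
    """One hyperedge per speaker, grouping all utterances by that speaker."""
    edges = {}
    for idx, spk in enumerate(speakers):
        edges.setdefault(spk, set()).add(idx)
    return list(edges.values())

def step_hyperedges(num_utts, step=3):
    """Hyperedges that skip ahead by a fixed step size to reach remote context."""
    return [set(range(start, num_utts, step)) for start in range(step)]

if __name__ == "__main__":
    speakers = ["A", "B", "A", "A", "B", "C", "B", "A"]  # toy dialogue
    n = len(speakers)
    hypergraphs = {
        "window": window_hyperedges(n, window=2),
        "speaker": speaker_hyperedges(speakers),
        "step": step_hyperedges(n, step=3),
    }
    for name, edges in hypergraphs.items():
        print(name, edges)
```

In a full model, each of these hypergraphs would feed a hypergraph convolution or attention layer so that utterance features are aggregated first over local neighborhoods and then over remote, speaker- or step-linked ones.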
