Abstract

In this article, we address Emotion Recognition in Conversation (ERC) where conversational data are presented in a multimodal setting. Psychological evidence shows that self- and inter-speaker influence are two central factors in the emotion dynamics of a conversation, yet state-of-the-art models do not effectively synthesise them. We therefore propose an Adapted Dynamic Memory Network (A-DMN) in which self- and inter-speaker influence are modelled individually and then synthesised with respect to the current utterance. Specifically, we model the dependencies among the constituent utterances of a dialogue video with a global RNN to capture inter-speaker influence, and assign each speaker a separate RNN to capture self-influence. An Episodic Memory Module then extracts contexts for self- and inter-speaker influence and synthesises them to update the memory; this process repeats over multiple passes until a refined representation is obtained and used for the final prediction. Additionally, we explore cross-modal fusion in the context of multimodal ERC and propose a convolution-based method that is effective at extracting local cross-modal interactions and computationally efficient. Extensive experiments demonstrate that A-DMN outperforms state-of-the-art models on benchmark datasets.
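The abstract outlines the main computational steps: a global RNN over all utterances for inter-speaker influence, a per-speaker RNN for self-influence, and an episodic memory that attends over both contexts and is refined over several passes. The PyTorch sketch below illustrates that flow under stated assumptions only; module names, dimensions, the attention form, and the GRU-cell memory update are hypothetical choices, and the convolution-based cross-modal fusion is assumed to have already produced the fused utterance features.

```python
# Minimal, hypothetical sketch of the A-DMN flow described in the abstract.
# All names, sizes, and update rules are assumptions, not the authors' code.
import torch
import torch.nn as nn


class EpisodicMemory(nn.Module):
    """One memory pass: attend over self and inter-speaker contexts,
    then synthesise them with the current memory via a GRU cell."""

    def __init__(self, dim):
        super().__init__()
        self.attn_self = nn.Linear(2 * dim, 1)
        self.attn_inter = nn.Linear(2 * dim, 1)
        self.update = nn.GRUCell(2 * dim, dim)

    def _attend(self, contexts, memory, scorer):
        # contexts: (T, dim); memory: (dim,)
        scores = scorer(torch.cat([contexts, memory.expand_as(contexts)], dim=-1))
        weights = torch.softmax(scores.squeeze(-1), dim=0)      # (T,)
        return (weights.unsqueeze(-1) * contexts).sum(dim=0)    # (dim,)

    def forward(self, self_ctx, inter_ctx, memory):
        c_self = self._attend(self_ctx, memory, self.attn_self)
        c_inter = self._attend(inter_ctx, memory, self.attn_inter)
        fused = torch.cat([c_self, c_inter]).unsqueeze(0)       # synthesise both contexts
        return self.update(fused, memory.unsqueeze(0)).squeeze(0)


class ADMNSketch(nn.Module):
    def __init__(self, feat_dim=100, dim=64, n_classes=6, passes=3):
        super().__init__()
        self.global_rnn = nn.GRU(feat_dim, dim)    # inter-speaker influence over all utterances
        self.speaker_rnn = nn.GRU(feat_dim, dim)   # self-influence (shared across speakers for brevity)
        self.memory = EpisodicMemory(dim)
        self.passes = passes
        self.proj = nn.Linear(feat_dim, dim)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, utterances, speakers, t):
        # utterances: (T, feat_dim) fused multimodal features; speakers: speaker id per utterance; t: current index
        inter_ctx, _ = self.global_rnn(utterances.unsqueeze(1))
        inter_ctx = inter_ctx.squeeze(1)                        # (T, dim)
        own = utterances[[i for i in range(len(speakers)) if speakers[i] == speakers[t]]]
        self_ctx, _ = self.speaker_rnn(own.unsqueeze(1))
        self_ctx = self_ctx.squeeze(1)
        memory = self.proj(utterances[t])                       # initialise memory from the current utterance
        for _ in range(self.passes):                            # multiple passes refine the representation
            memory = self.memory(self_ctx, inter_ctx, memory)
        return self.classifier(memory)


if __name__ == "__main__":
    model = ADMNSketch()
    feats = torch.randn(8, 100)                                 # 8 utterances of fused multimodal features
    logits = model(feats, speakers=[0, 1, 0, 1, 0, 1, 0, 1], t=5)
    print(logits.shape)                                         # torch.Size([6])
```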
