Abstract

Emotion recognition in conversation aims to identify the emotion of each constituent utterance in a conversation from a set of pre-defined emotions. The task has recently become a popular research frontier in natural language processing because of the growing availability of open conversational data and its applications in opinion mining. However, most existing methods cannot effectively capture long-range contextual information within an utterance or across a conversation. To alleviate this problem, we propose a novel hierarchical attention network with a residual gated recurrent unit (GRU) framework. First, we adopt the pre-trained BERT-Large model to obtain a context-dependent representation for each token of each utterance in a conversation. Then, a hierarchical attention network is proposed to capture long-range contextual information over the conversation structure. In addition, to better model the position information of utterances in a conversation, we add position embeddings to the input of the multi-head attention. Experiments on two textual dialogue emotion datasets demonstrate that our model significantly outperforms state-of-the-art baseline methods.
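To make the described pipeline concrete, the sketch below wires pre-computed BERT-Large token states through a token-level attention layer, a residual GRU, and an utterance-level multi-head attention layer, with learned position embeddings added to the attention inputs. This is a minimal illustration under assumptions: the hidden sizes, number of heads, mean pooling, number of emotion classes, and exact layer ordering are ours, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ResidualGRU(nn.Module):
    """Bidirectional GRU whose output is added back to its input (residual connection)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.gru = nn.GRU(d_model, d_model // 2, batch_first=True, bidirectional=True)

    def forward(self, x):                                   # x: (batch, seq_len, d_model)
        out, _ = self.gru(x)
        return x + out                                      # residual connection


class HierarchicalAttentionERC(nn.Module):
    """Token-level attention per utterance, then utterance-level attention over the
    conversation, with position embeddings added before each multi-head attention."""

    def __init__(self, d_model: int = 1024, n_heads: int = 8,
                 n_emotions: int = 7, max_len: int = 512):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.token_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.utt_rnn = ResidualGRU(d_model)
        self.conv_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_emotions)

    def forward(self, token_states):
        # token_states: (n_utterances, n_tokens, d_model) contextual BERT states
        n_utt, n_tok, _ = token_states.shape

        # 1) utterance level: add token position embeddings, self-attend, mean-pool
        h = token_states + self.pos_emb(torch.arange(n_tok))
        h, _ = self.token_attn(h, h, h)
        utt_vecs = h.mean(dim=1).unsqueeze(0)               # (1, n_utterances, d_model)

        # 2) conversation level: residual GRU, then self-attention over utterances
        utt_vecs = self.utt_rnn(utt_vecs)
        q = utt_vecs + self.pos_emb(torch.arange(n_utt)).unsqueeze(0)
        g, _ = self.conv_attn(q, q, q)

        return self.classifier(g.squeeze(0))                # (n_utterances, n_emotions)
```

A usage example under the same assumptions: BERT-Large yields 1024-dimensional token states, so a 5-utterance conversation with 40 tokens per utterance would be encoded as a tensor of shape (5, 40, 1024), and `HierarchicalAttentionERC(d_model=1024)(states)` returns one emotion logit vector per utterance.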
