Abstract

While there have been significant advances in detecting emotions in text, many problems remain unsolved in utterance-level emotion recognition (ULER). In this paper, we address several challenges of ULER in dialog systems. (1) The same utterance can convey different emotions in different contexts. (2) Long-range contextual information is hard to capture effectively. (3) Unlike traditional text classification problems, most datasets for this task contain only a limited number of conversations or utterances. (4) Speaker information is needed to model the emotional interaction between speakers. To address problems (1) and (2), we propose a hierarchical transformer framework (except when describing other studies, “transformer” in this paper refers to the encoder part of the transformer) with a lower-level transformer that models the word-level input and an upper-level transformer that captures the context of utterance-level embeddings. For problem (3), we use bidirectional encoder representations from transformers (BERT), a pretrained language model, as the lower-level transformer; this is equivalent to introducing external data into the model and alleviates the data shortage to some extent. For problem (4), we add speaker embeddings to the model for the first time, which enables our model to capture the interaction between speakers. Experiments on three dialog emotion datasets, Friends, EmotionPush, and EmoryNLP, demonstrate that our proposed hierarchical transformer networks obtain competitive results compared with state-of-the-art methods in terms of the macro-averaged F1-score (macro-F1).
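To make the described architecture concrete, the sketch below shows one plausible way to wire a pretrained BERT lower-level encoder, an upper-level transformer over utterance embeddings, and additive speaker embeddings in PyTorch with the Hugging Face transformers library. It is not the authors' released implementation; the class name HierTransformerERC, the argument names (num_speakers, num_emotions), and the default hyperparameters are illustrative assumptions.

```python
import torch.nn as nn
from transformers import BertModel


class HierTransformerERC(nn.Module):
    """Minimal sketch: lower-level BERT + upper-level transformer + speaker embeddings."""

    def __init__(self, num_emotions, num_speakers, hidden=768, nhead=8, num_layers=2):
        super().__init__()
        # Lower-level transformer: pretrained BERT encodes each utterance,
        # which also brings external (pretraining) data into the model.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Speaker embeddings, added to the utterance representations.
        self.speaker_emb = nn.Embedding(num_speakers, hidden)
        # Upper-level transformer: captures context across the utterances of a dialog.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=nhead, batch_first=True)
        self.upper = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(hidden, num_emotions)

    def forward(self, input_ids, attention_mask, speaker_ids):
        # input_ids, attention_mask: (num_utterances, max_tokens) for one dialog
        # speaker_ids: (num_utterances,) integer speaker index per utterance
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        utt_vecs = out.last_hidden_state[:, 0]        # [CLS] vector per utterance
        utt_vecs = utt_vecs + self.speaker_emb(speaker_ids)
        ctx = self.upper(utt_vecs.unsqueeze(0))       # (1, num_utterances, hidden)
        return self.classifier(ctx.squeeze(0))        # (num_utterances, num_emotions)
```

Adding the speaker embedding to each [CLS] vector before the upper-level encoder is one simple way to expose speaker identity to the context model; other combinations (e.g., concatenation) would also fit the description above.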

Highlights

  • Sentiment analysis, considered one of the most important methods for analyzing real-world communication, is a classification task that extracts emotion from language

  • To address problems (1) and (2), we propose a hierarchical transformer framework with a lower-level transformer that models the word-level input and an upper-level transformer that captures the context of utterance-level embeddings

  • We report the weighted accuracy (WA) and unweighted accuracy (UWA), which were adopted in a previous study [30]; a sketch of how such metrics can be computed follows this list
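As a point of reference for the metrics mentioned above, the following sketch computes macro-F1 together with WA and UWA using scikit-learn. It assumes the conventions common in this literature, where WA is the overall (class-frequency-weighted) accuracy and UWA is the unweighted mean of per-class recalls; the exact definitions in the cited study may differ, and the function name and dummy labels are illustrative.

```python
from sklearn.metrics import accuracy_score, f1_score, recall_score


def report_metrics(y_true, y_pred):
    # WA: accuracy over all samples, so frequent classes weigh more.
    wa = accuracy_score(y_true, y_pred)
    # UWA: mean of per-class recalls, so every class weighs equally.
    uwa = recall_score(y_true, y_pred, average="macro")
    # Macro-F1: F1-score averaged over classes.
    macro_f1 = f1_score(y_true, y_pred, average="macro")
    return {"WA": wa, "UWA": uwa, "macro-F1": macro_f1}


# Example with dummy emotion labels (0 = neutral, 1 = joy, 2 = anger):
print(report_metrics([0, 1, 2, 1, 0], [0, 1, 1, 1, 0]))
```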


Summary

Introduction

Sentiment analysis, considered one of the most important methods for analyzing real-world communication, is a classification task that extracts emotion from language. It can help us make progress in many fields. We consider one of the tasks in this research direction, utterance-level emotion recognition (ULER) [1]. In ULER, contextual information is indispensable for emotion discrimination. To identify a speaker’s emotion precisely, Hazarika et al. [3] proposed contextual representations for prediction with a recurrent neural network (RNN), where each utterance is represented by a feature vector extracted
