Abstract

Emotion recognition has received considerable attention in recent years with the growing popularity of social media. It is noted, however, that state-of-the-art language models such as Bidirectional Encoder Representations from Transformers (BERT) may not produce the best performance in emotion recognition. We found that the main cause is that the embeddings of emotional words produced by a pre-trained BERT model may not exhibit high between-class difference and within-class similarity. While fine-tuning a BERT model is common practice when applying it to a specific task, this may not be practical for emotion recognition because most datasets are small, and many texts are short and noisy, containing little useful contextual information. In this paper, we propose to use knowledge of the emotion vocabulary to fine-tune the embeddings of emotional words. As a separate module independent of the embedding learning model, the fine-tuning model aims to produce emotional word embeddings with improved within-class similarity and between-class difference. By combining the emotionally discriminative fine-tuned embeddings with the contextual information-rich embeddings from the pre-trained BERT model, the emotional features underlying the texts can be more effectively captured by the subsequent feature learning module, which in turn leads to improved emotion recognition performance. The knowledge-based word embedding fine-tuning model is evaluated on five emotion recognition datasets, and the results and analysis demonstrate the effectiveness of the proposed method.
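To make the combination step concrete, the sketch below (a minimal illustration, not the authors' implementation) concatenates a contextual embedding from a frozen pre-trained BERT encoder with a separately fine-tuned, emotion-discriminative word embedding before feature learning. It assumes PyTorch; the class name `CombinedEmbedding`, the dimensions, and the variable names are all hypothetical.

```python
import torch
import torch.nn as nn


class CombinedEmbedding(nn.Module):
    """Concatenates contextual BERT embeddings with a separate,
    emotion-discriminative embedding table, as described in the abstract.
    All names and dimensions here are illustrative assumptions."""

    def __init__(self, vocab_size: int, bert_dim: int = 768, emo_dim: int = 100):
        super().__init__()
        # Emotion-discriminative embedding table, fine-tuned independently
        # of BERT so that small, noisy datasets need not alter the
        # contextual encoder itself.
        self.emo_embed = nn.Embedding(vocab_size, emo_dim)

    def forward(self, bert_hidden: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        # bert_hidden: (batch, seq_len, bert_dim) from a frozen BERT encoder
        # token_ids:   (batch, seq_len) indices into the shared vocabulary
        emo = self.emo_embed(token_ids)  # (batch, seq_len, emo_dim)
        # Concatenate along the feature dimension so the downstream
        # feature learning module sees both representations.
        return torch.cat([bert_hidden, emo], dim=-1)  # (batch, seq_len, bert_dim + emo_dim)


# Hypothetical usage with a frozen encoder (e.g., via the transformers library):
#   hidden = bert(input_ids).last_hidden_state        # (batch, seq_len, 768)
#   combined = CombinedEmbedding(vocab_size=30522)(hidden, input_ids)
```

Concatenation is only one plausible way to fuse the two embeddings; the paper's feature learning module may combine them differently.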
