Abstract

Emotion classification is one of the most important tasks in natural language processing (NLP). It focuses on identifying the kind of emotion expressed in a text. However, most existing models are based on deep learning methods, which often suffer from long training times and difficulties in convergence and theoretical analysis. To address these problems, we propose a method for text emotion classification based on bidirectional encoder representations from transformers (BERT) and the broad learning system (BLS). Texts are fed into the pre-trained BERT model to obtain context-dependent word embeddings, and all word vectors are averaged to obtain a sentence embedding. The feature nodes and enhancement nodes of BLS extract the linear and nonlinear features of the text, respectively, and three cascading BLS structures are designed to transform the input data and improve text feature extraction. The two groups of features are fused and fed into the output layer to obtain a probability distribution over the emotion classes, thereby achieving emotion classification. Extensive experiments are conducted on datasets from SemEval-2019 Task 3 and SMP2020-EWECT, and the results show that our proposed method reduces training time and improves classification performance compared with the baseline methods.
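To make the BLS pipeline concrete, the following is a minimal sketch, not the paper's exact implementation: random linear feature nodes and nonlinear enhancement nodes are generated from the (here, simulated) sentence embeddings, the two groups are fused, and the output weights are solved in closed form by ridge regression, which is why BLS avoids long iterative training. All dimensions, node counts, and the regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_fit(X, Y, n_feature=20, n_enhance=40, reg=1e-2):
    """Fit output weights for a single-block BLS.
    X: (n, d) sentence embeddings (e.g. mean-pooled BERT vectors),
    Y: (n, c) one-hot emotion labels."""
    d = X.shape[1]
    Wf = rng.standard_normal((d, n_feature))          # random linear feature mapping
    Z = X @ Wf                                        # feature nodes (linear)
    We = rng.standard_normal((n_feature, n_enhance))  # random enhancement mapping
    H = np.tanh(Z @ We)                               # enhancement nodes (nonlinear)
    A = np.hstack([Z, H])                             # fuse the two feature groups
    # ridge-regularized least squares: closed-form, no iterative training
    Wout = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, Wout

def bls_predict(X, Wf, We, Wout):
    Z = X @ Wf
    A = np.hstack([Z, np.tanh(Z @ We)])
    return A @ Wout                                   # per-class scores

# Toy usage: 100 fake 768-dimensional "sentence embeddings", 4 emotion classes.
X = rng.standard_normal((100, 768))
Y = np.eye(4)[rng.integers(0, 4, 100)]
params = bls_fit(X, Y)
scores = bls_predict(X, *params)
print(scores.shape)  # (100, 4)
```

In a real setup the input `X` would come from averaging BERT's token embeddings per sentence, and the paper additionally cascades several such BLS blocks, which this single-block sketch omits.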
