Abstract

Current word vector techniques can generate effective word representations from massive text data, and different deep learning models can extract textual semantic and grammatical features from different perspectives and at different levels. To extract richer text sentiment features, we propose a fused text sentiment classification model called BERT-CBLBGA, which combines BERT with multiple deep learning feature-extraction models. First, BERT is used to generate word embeddings of the text, and the resulting feature representations are fed into TextCNN, BiLSTM, and BiGRU to extract local and global sentiment features. Next, self-attention is applied to fuse the outputs of TextCNN, BiLSTM, and BiGRU, strengthening the model's ability to extract sentiment features. Finally, a classifier is applied to the fused features. Experimental results on four text datasets from different domains demonstrate that BERT-CBLBGA outperforms the baseline models in extracting text sentiment features and achieves the best classification performance.
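
The following is a minimal PyTorch sketch of the pipeline the abstract describes (BERT embeddings, parallel TextCNN/BiLSTM/BiGRU branches, self-attention fusion, classifier). It is not the authors' implementation: the hidden sizes, convolution kernel sizes, pooling choices, attention head count, and the exact form of the self-attention fusion are all assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertCBLBGA(nn.Module):
    """Sketch of BERT-CBLBGA; all hyperparameters here are assumed."""

    def __init__(self, num_classes=2, hidden=128, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        d = self.bert.config.hidden_size  # 768 for bert-base

        # TextCNN branch: parallel 1-D convolutions over the token axis
        # capture local n-gram features.
        self.convs = nn.ModuleList(
            nn.Conv1d(d, hidden, k, padding=k // 2) for k in kernel_sizes
        )
        self.cnn_proj = nn.Linear(hidden * len(kernel_sizes), 2 * hidden)

        # Recurrent branches capture global (sequential) context.
        self.bilstm = nn.LSTM(d, hidden, batch_first=True, bidirectional=True)
        self.bigru = nn.GRU(d, hidden, batch_first=True, bidirectional=True)

        # Self-attention fuses the three branch outputs (assumed form).
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                          batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # BERT word embeddings: (batch, seq_len, 768)
        x = self.bert(input_ids,
                      attention_mask=attention_mask).last_hidden_state

        # Local features: max-pool each convolution over time.
        c = x.transpose(1, 2)  # (batch, 768, seq_len)
        cnn = torch.cat([conv(c).max(dim=2).values for conv in self.convs],
                        dim=1)
        cnn = self.cnn_proj(cnn)  # (batch, 2*hidden)

        # Global features: last-timestep states of BiLSTM and BiGRU.
        lstm = self.bilstm(x)[0][:, -1, :]  # (batch, 2*hidden)
        gru = self.bigru(x)[0][:, -1, :]    # (batch, 2*hidden)

        # Stack the three branch vectors as a length-3 sequence, apply
        # self-attention across them, then mean-pool and classify.
        branches = torch.stack([cnn, lstm, gru], dim=1)  # (batch, 3, 2*hidden)
        fused, _ = self.attn(branches, branches, branches)
        return self.classifier(fused.mean(dim=1))  # (batch, num_classes)
```

Under these assumptions, the model is trained end to end with a standard cross-entropy loss on the classifier logits; only the fusion step (stacking the three branch vectors and attending over them) distinguishes it from using any single branch alone.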
