Abstract

Text classification is a typical application of natural language processing, and deep learning is currently the most commonly used approach to it. At the same time, natural language processing faces many difficulties, such as metaphorical expression, semantic diversity, and grammatical particularity. To address these problems, this paper proposes a BERT-BiGRU model structure. First, the BERT model replaces the traditional word2vec model for word-vector representation: word representations are computed from contextual information, so the vector of a word can be adjusted according to its meaning as context is fused in. Second, a BiGRU model is attached to the BERT model; the BiGRU can extract textual features in both directions simultaneously. Multiple sets of comparative experiments were set up against the model proposed in this paper. According to the final experimental results, text classification with the proposed BERT-BiGRU model achieved accuracy, recall, and F1 scores all above 0.9, showing that the BERT-BiGRU model performs well on the Chinese text classification task.
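
To make the described architecture concrete, the following is a minimal sketch of a BERT-BiGRU classifier, assuming PyTorch and the Hugging Face transformers library; the `bert-base-chinese` checkpoint, the GRU hidden size, and the class names are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the BERT-BiGRU structure described in the abstract (assumed
# hyperparameters; not the authors' exact configuration).
import torch
import torch.nn as nn
from transformers import BertModel

class BertBiGRU(nn.Module):
    def __init__(self, num_classes, hidden_size=128):
        super().__init__()
        # BERT yields contextual word vectors in place of static word2vec
        # embeddings, so each token's representation depends on its context.
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        # A bidirectional GRU reads the token representations from both
        # directions at the same time.
        self.bigru = nn.GRU(
            input_size=self.bert.config.hidden_size,
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,
        )
        # Forward and backward final states are concatenated for classification.
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings: (batch, seq_len, bert_hidden)
        token_states = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # final_states: (2, batch, hidden) for a single bidirectional layer
        _, final_states = self.bigru(token_states)
        features = torch.cat([final_states[0], final_states[1]], dim=-1)
        return self.classifier(features)  # (batch, num_classes) logits
```

In this sketch, sentence-level features come from the GRU's last forward and backward hidden states; pooling over all time steps or adding attention would be an equally plausible reading of the design.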
