Abstract
Text classification is a typical application of natural language processing, and deep learning is currently the most widely used approach to it. However, natural language processing still faces many difficulties, such as metaphorical expression, semantic diversity, and grammatical specificity. To address these problems, this paper proposes a BERT-BiGRU model. First, the BERT model replaces the traditional word2vec model for word-vector representation: each word's representation is computed from its context, so it can be adjusted according to the word's meaning as contextual information is fused in. Second, a BiGRU model is attached after BERT; the BiGRU extracts text features in both directions simultaneously. Multiple sets of comparison experiments were conducted against the proposed model. In the final results, the BERT-BiGRU model achieved accuracy, recall, and F1 scores all above 0.9 for text classification, showing that it performs well on the Chinese text classification task.
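The bidirectional feature extraction described above can be sketched with the standard GRU update equations: an update gate z, a reset gate r, and a candidate state combined with the previous hidden state; a forward and a backward pass are then concatenated per token. The sketch below is a minimal NumPy illustration, not the paper's implementation: the parameter names (Wz, Uz, ...), the random initialization, and the toy dimensions are all assumptions, and in the actual model the input vectors would be contextual embeddings produced by a pretrained BERT encoder.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, p):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev)          # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev)          # reset gate
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h_prev))  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde

def init_params(d_in, d_hid, rng):
    """Illustrative random init; W* maps inputs, U* maps the hidden state."""
    return {k: 0.1 * rng.standard_normal((d_hid, d_in if k.startswith("W") else d_hid))
            for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

def bigru(xs, fwd, bwd, d_hid):
    """Run a forward and a backward GRU over the sequence of vectors xs
    and concatenate the two hidden states at each position."""
    h = np.zeros(d_hid)
    forward = []
    for x in xs:                      # left-to-right pass
        h = gru_step(x, h, fwd)
        forward.append(h)
    h = np.zeros(d_hid)
    backward = []
    for x in reversed(xs):            # right-to-left pass
        h = gru_step(x, h, bwd)
        backward.append(h)
    backward.reverse()
    return [np.concatenate([f, b]) for f, b in zip(forward, backward)]
```

In the full model, each per-token output (of twice the hidden size, since both directions are concatenated) would feed a classification layer; here the inputs are random toy vectors standing in for BERT embeddings.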