Abstract
Text classification is a classical and important research problem in natural language processing. Recently, deep learning models have increasingly become one of the main methods for solving text classification problems; common deep learning text classification models include convolutional neural networks (CNN), recurrent neural networks (RNN), and the BERT model. To compare the performance of various deep learning models on text classification tasks, this paper evaluates the classification accuracy of different deep learning models under the same experimental configuration. The experimental results show that using pre-trained word vectors helps to improve classification accuracy: a model initialized with pre-trained word vectors achieves higher accuracy than the same model trained without them. In addition, a reasonably designed, more complex and larger deep learning model can enhance the model's ability to learn from text data. In the comparison experiment among the feedforward neural network (FNN), CNN, RNN, and BERT models, the BERT model performs best, reaching a text classification accuracy of 0.9232, an improvement of about 16% over a 1-layer FNN.
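To illustrate the pre-trained-vector comparison described above, here is a minimal PyTorch sketch of a CNN text classifier that can be initialized either from pre-trained word vectors or from scratch. All names and sizes (TextCNN, VOCAB_SIZE, EMBED_DIM, kernel size, etc.) are hypothetical placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the paper does not specify its vocabulary or dimensions.
VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 20000, 300, 2

class TextCNN(nn.Module):
    """A minimal CNN text classifier in the spirit of the models compared."""
    def __init__(self, pretrained_vectors=None):
        super().__init__()
        if pretrained_vectors is not None:
            # Initialize the embedding layer from pre-trained word vectors
            # (e.g. word2vec or GloVe), fine-tuned during training.
            self.embedding = nn.Embedding.from_pretrained(
                pretrained_vectors, freeze=False)
        else:
            # Randomly initialized embeddings, learned from scratch.
            self.embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.conv = nn.Conv1d(EMBED_DIM, 100, kernel_size=3, padding=1)
        self.fc = nn.Linear(100, NUM_CLASSES)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, dim, seq)
        x = torch.relu(self.conv(x)).max(dim=2).values # global max-pooling
        return self.fc(x)                              # class logits

# Stand-in tensor for real pre-trained vectors of shape (VOCAB_SIZE, EMBED_DIM).
vectors = torch.randn(VOCAB_SIZE, EMBED_DIM)
model_pretrained = TextCNN(pretrained_vectors=vectors)
model_scratch = TextCNN()
```

Training both variants identically and comparing test accuracy is the kind of controlled contrast the abstract reports, with the pre-trained initialization expected to score higher.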