Text classification is a classical and important research problem in natural language processing. In recent years, deep learning models have become one of the main approaches to text classification; common examples include convolutional neural networks (CNN), recurrent neural networks (RNN), and the BERT model. To compare the performance of different deep learning models on text classification tasks, this thesis measures the classification accuracy of each model under the same experimental configuration. The experimental results show that using pre-trained word vectors improves classification accuracy: models initialized with pre-trained word vectors achieve higher accuracy than models trained without them. In addition, a reasonable design of larger and more complex deep learning models enhances a model's ability to learn from text data. In the comparison experiment among a feedforward neural network (FNN), CNN, RNN, and BERT, the BERT model performs best, reaching a text classification accuracy of 0.9232, about 16% higher than that of a 1-layer FNN.
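The role of pre-trained word vectors can be illustrated with a minimal sketch. This is a hypothetical, simplified example (not the thesis's actual pipeline): a tiny vocabulary, two-dimensional vectors standing in for real pre-trained embeddings such as word2vec or GloVe, an embedding matrix that copies in pre-trained vectors where available and falls back to random initialization otherwise, and mean pooling to produce a sentence vector that a downstream classifier would consume.

```python
import numpy as np

# Hypothetical toy vocabulary and "pre-trained" vectors for illustration;
# real experiments would use word2vec/GloVe vectors of dimension 100-300.
vocab = {"<pad>": 0, "good": 1, "bad": 2, "movie": 3}
pretrained = {
    "good": np.array([0.9, 0.1]),
    "bad": np.array([-0.8, 0.2]),
}

def build_embedding(vocab, pretrained, dim=2, seed=0):
    """Initialize an embedding matrix: copy pre-trained vectors where
    available, fall back to small random values for out-of-vocabulary words."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.1, size=(len(vocab), dim))
    for word, idx in vocab.items():
        if word in pretrained:
            emb[idx] = pretrained[word]
    return emb

def encode(tokens, vocab, emb):
    """Mean-pool token embeddings into a fixed-size sentence vector."""
    idxs = [vocab[t] for t in tokens if t in vocab]
    return emb[idxs].mean(axis=0)

emb = build_embedding(vocab, pretrained)
sentence_vec = encode(["good", "movie"], vocab, emb)
```

Because known words start from vectors that already encode semantic similarity, the classifier begins training from a more informative representation than random initialization, which is one common explanation for the accuracy gain the experiments observe.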