Abstract
Text classification is a fundamental task in natural language processing. Convolutional neural networks (CNNs) have been widely employed for it and have achieved excellent results. Nevertheless, the parameters of a CNN are prone to overfitting during training, which limits the network's performance. Adversarial training is an effective regularization method that restrains overfitting and makes the model robust against worst-case perturbations. In this article, we apply adversarial training to a convolutional neural network for text classification. In our model, the perturbation is applied to the word embedding layer rather than to the original input. We train and evaluate the model on five benchmark datasets. Our experiments indicate that the classifier with adversarial training resists small perturbations better and effectively controls overfitting. We also analyze the influence of the norm-constraint parameter on the classifier and identify appropriate values for it.
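The core operation described above, perturbing the embedding layer rather than the raw input, can be sketched as follows. This is a minimal illustration assuming an L2 norm constraint on the perturbation (the standard FGM-style formulation, r_adv = ε·g/‖g‖₂, where g is the gradient of the loss with respect to the embedding); the function name, ε value, and example gradient are all hypothetical, not taken from the paper.

```python
import numpy as np

def adversarial_perturbation(grad, epsilon=0.5):
    """Worst-case perturbation under an L2 ball of radius epsilon:
    r_adv = epsilon * g / ||g||_2, added to the word embedding
    (not to the discrete token input)."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return epsilon * grad / norm

# Hypothetical gradient of the loss w.r.t. a 4-dimensional word embedding
g = np.array([3.0, 0.0, 4.0, 0.0])
r = adversarial_perturbation(g, epsilon=0.5)
# The perturbation points along the gradient and has L2 norm exactly epsilon,
# the norm-constraint parameter whose influence the paper analyzes.
```

During adversarial training, the model is then trained on the loss evaluated at the perturbed embedding, so the classifier learns to be robust to the worst small shift within the ε-ball.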