Abstract

In recent years, contrastive learning has achieved notable success in unsupervised representation learning and has attracted growing attention in supervised tasks. In supervised settings, however, the discrete nature of natural language makes sample pairs difficult to construct, and models remain vulnerable to adversarial examples; it is therefore still a challenge to make contrastive learning effective for text classification while guaranteeing model robustness. This paper presents a contrastive adversarial learning framework built on data augmentation with labeled insertion data. Specifically, adversarial samples are generated by adding perturbations to the word-embedding matrix and serve as positive examples for contrastive learning, while external semantic information is introduced to construct negative examples. Contrastive learning improves the sensitivity and generalization ability of the model, and adversarial training improves its robustness, together raising classification accuracy. In addition, momentum contrast, originally developed for unsupervised learning, is introduced into the text classification task to increase the number of sample pairs. Experimental results on several datasets show that the proposed approach outperforms the baseline methods, and further experiments verify the effectiveness of the framework under low-resource conditions.
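To make the construction concrete, below is a minimal sketch (not the paper's released code) of the two core ingredients the abstract describes: an FGSM-style perturbation of the word embeddings that yields an adversarial positive, and an InfoNCE-style contrastive loss that contrasts it against negatives. The names `encoder`, `classifier`, and `epsilon` are illustrative assumptions rather than identifiers from the paper.

```python
# Hedged sketch: adversarial positives via embedding perturbation + InfoNCE.
# `encoder`, `classifier`, and the hyperparameters are assumptions, not the
# paper's actual implementation.
import torch
import torch.nn.functional as F

def adversarial_positive(encoder, classifier, embeddings, labels, epsilon=1e-2):
    """FGSM-style step: perturb the input embeddings along the gradient of
    the task loss so the perturbed copy can serve as a contrastive positive."""
    embeddings = embeddings.detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(encoder(embeddings)), labels)
    (grad,) = torch.autograd.grad(loss, embeddings)
    # An L2-normalized gradient step keeps the perturbation small.
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    return (embeddings + delta).detach()

def info_nce(anchor, positive, negatives, temperature=0.07):
    """InfoNCE loss: pull each anchor toward its adversarial positive and
    push it away from the negatives (e.g., encodings of other-label texts).
    Shapes: anchor, positive (B, D); negatives (K, D)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = (anchor * positive).sum(dim=-1, keepdim=True)   # (B, 1)
    neg = anchor @ negatives.t()                          # (B, K)
    logits = torch.cat([pos, neg], dim=1) / temperature
    # The positive sits at index 0 of each row of logits.
    target = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, target)
```

In a momentum-contrast setup such as the one the abstract mentions, `negatives` would be drawn from a queue of encodings produced by a slowly updated momentum encoder rather than from the current batch, which is what allows the number of sample pairs to grow beyond the batch size.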

