Abstract

Recently, prompt tuning, which incorporates prompts into the input of a pre-trained language model (such as BERT or GPT), has shown promise in improving model performance when annotated data is limited. However, semantically equivalent templates do not necessarily yield equivalent effects, and prompt tuning often exhibits unstable performance, a problem that is more severe in the scientific domain. To address this challenge, we propose to enhance prompt tuning with data augmentation under L2 regularization: each original example is paired with its transformed counterpart, and the pair is trained jointly. Our experiments on two scientific text datasets (ACL-ARC and SciCite) demonstrate that the proposed method significantly improves both accuracy and robustness. Using 1000 of the 1688 samples in the ACL-ARC training set, our method achieves an F1 score 3.33% higher than the same model trained on all 1688 samples. On the SciCite dataset, our method surpasses the same model with labeled data reduced by over 93%. Our method also proves highly robust, reaching F1 scores 1% to 8% higher than models without it under the Probability Weighted Word Saliency attack.
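One plausible form of the pairwise objective described above is a classification loss on both the original and augmented views plus an L2 penalty that pulls their outputs together. The sketch below is an illustrative assumption, not the paper's exact formulation; the function names, the logit-space penalty, and the weight `lam` are hypothetical choices.

```python
import math

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))

def paired_loss(logits_orig, logits_aug, label, lam=0.1):
    """Hypothetical pairwise training objective: supervised loss on the
    original and the augmented view, plus an L2 regularizer on the gap
    between their logits (encouraging consistent predictions)."""
    ce = cross_entropy(logits_orig, label) + cross_entropy(logits_aug, label)
    l2 = sum((a - b) ** 2 for a, b in zip(logits_orig, logits_aug))
    return ce + lam * l2
```

When the two views already agree, the L2 term vanishes and only the supervised losses remain; disagreement between views is penalized in proportion to `lam`.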
