Abstract

Recently, the new “pre-train, prompt, and predict” paradigm has achieved remarkable few-shot learning results compared with the “pre-train, fine-tune” paradigm. Prompt-tuning inserts prompt text into the input and converts the classification task into a masked language modeling task. A key step is building the projection between labels and label words, i.e., the verbalizer. Knowledgeable prompt-tuning (KPT) integrates external knowledge into the verbalizer to improve and stabilize prompt-tuning; it uses word embeddings and various knowledge graphs to expand the label word space to hundreds of words per class. However, unreasonable label words in the verbalizer may harm accuracy. In this paper, a new method called KPT++ is proposed to improve few-shot text classification. KPT++ is a refined knowledgeable prompt-tuning method and can be regarded as an upgraded version of KPT. Specifically, KPT++ refines the knowledgeable verbalizer with two newly proposed techniques: prompt grammar refinement (PGR) and probability distribution refinement (PDR). Extensive experiments on few-shot text classification tasks demonstrate that KPT++ outperforms the state-of-the-art KPT and other baseline methods. Furthermore, ablation experiments and case studies demonstrate the effectiveness of both the PGR and PDR refinements.
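To make the verbalizer idea concrete, the following is a minimal sketch of how a knowledgeable verbalizer scores classes: each class maps to many label words, and the masked language model's probabilities at the [MASK] position are aggregated over each class's word set. This is an illustration under assumed names and toy inputs, not the paper's PGR or PDR procedure; the label-word lists and token ids are hypothetical.

```python
# Sketch of a knowledgeable verbalizer: average the [MASK]-position
# probabilities of each class's label words and pick the best class.
# Label words and token ids below are hypothetical placeholders.
import torch

# KPT-style expanded label-word sets (illustrative only).
verbalizer = {
    "sports":   ["sports", "football", "basketball", "athlete"],
    "politics": ["politics", "government", "election", "policy"],
}

# In practice these ids would come from the pretrained model's tokenizer;
# here they are arbitrary placeholders within the vocabulary range.
word_ids = {
    "sports":   [2015, 2374, 3455, 5611],
    "politics": [2576, 2231, 2602, 3343],
}

def classify(mask_logits: torch.Tensor, word_ids: dict) -> str:
    """Score each class by the mean [MASK] probability of its label words."""
    probs = torch.softmax(mask_logits, dim=-1)
    scores = {cls: probs[ids].mean().item() for cls, ids in word_ids.items()}
    return max(scores, key=scores.get)

# Toy usage: random logits stand in for a masked language model's output
# at the [MASK] position (vocabulary size chosen arbitrarily).
torch.manual_seed(0)
mask_logits = torch.randn(30522)
print(classify(mask_logits, word_ids))
```

A refinement step in this spirit would prune label words that contribute noisy or implausible probabilities before aggregation; the specific criteria used by PGR and PDR are described in the paper itself.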
