Abstract

Pre-trained language models have demonstrated remarkable few-shot learning performance through the emergence of “prompt-based learning” methods, where task performance relies heavily on the quality of the prompts. Existing prompt learning methods typically customize a single prompt for each few-shot learning task, and all examples in the task share this universal prompt. However, a fine-grained prompt design can improve few-shot performance by leveraging the more diverse information hidden in the set of examples. Motivated by this observation, this paper introduces an example-specific prompt learning method that produces fine-grained, self-adapting prompts for few-shot learning with pre-trained models. Specifically, we introduce the “weak consistency assumption” to trade off task-specific consistency against example-specific diversity. Based on this assumption, we propose a novel method called Self-adapting Continuous Prompt Learning (SP-learning) to learn example-specific prompts. It employs a cross-attention prompt generator that conditions on the characteristics of each input sample and a diversity calibration technique that adjusts the prompt generator accordingly. By personalizing prompts for each example, SP-learning aims to improve few-shot learning performance. We perform a systematic evaluation on 10 public benchmark tasks, and our method outperforms the baselines on 8 of them. Our research sheds light on the importance of personalized prompts and opens up new possibilities for improving few-shot learning.
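To make the idea of an example-specific, cross-attention prompt generator concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes a set of shared learnable prompt queries that attend to each example's token embeddings to produce a per-example continuous prompt; all class and parameter names (e.g., CrossAttentionPromptGenerator, prompt_queries) are hypothetical, and the diversity calibration step described in the abstract is not shown.

```python
# Minimal sketch (assumed design, not the paper's code): example-specific prompts
# produced by cross-attention between shared learnable queries and the input tokens.
import torch
import torch.nn as nn


class CrossAttentionPromptGenerator(nn.Module):
    def __init__(self, embed_dim: int, prompt_len: int, num_heads: int = 8):
        super().__init__()
        # Task-level learnable queries, shared across all examples in the task.
        self.prompt_queries = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        # Cross-attention: the queries attend to one example's token embeddings.
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim) from the frozen PLM's embedding layer.
        batch = token_embeds.size(0)
        queries = self.prompt_queries.unsqueeze(0).expand(batch, -1, -1)
        # Each example yields its own continuous prompt, conditioned on its tokens.
        prompts, _ = self.cross_attn(queries, token_embeds, token_embeds)
        return prompts  # (batch, prompt_len, embed_dim)


# Usage: prepend the generated prompt to the example's embeddings before the PLM.
gen = CrossAttentionPromptGenerator(embed_dim=768, prompt_len=10)
x = torch.randn(4, 32, 768)              # stand-in for a batch of token embeddings
prompted = torch.cat([gen(x), x], dim=1)  # (4, 42, 768), fed to the pre-trained model
```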
