Abstract

Few-shot learning, a subfield of machine learning, addresses the problem of learning new tasks from only a small amount of annotated data. It is more challenging than traditional supervised learning because very few training samples are available, so the model must learn quickly and generalize to unseen examples. Prompt learning is a recently introduced training paradigm in natural language processing that exploits the language capabilities of large pre-trained language models to get a fast start on new tasks. Building on the prompt-learning paradigm, this paper proposes using its conjugate tasks to further strengthen the model's few-shot learning ability. Experimental results show that the proposed method effectively improves model performance on multiple datasets and can be readily combined with other methods for joint optimization.
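To make the base paradigm concrete, below is a minimal sketch of standard prompt learning for classification with a masked language model, assuming the Hugging Face transformers library. The cloze template and the verbalizer words ("great"/"terrible") are illustrative choices, not taken from the paper, and this shows only the plain prompt-learning setup, not the paper's conjugate-task extension.

```python
# Minimal prompt-learning sketch: wrap the input in a cloze template and
# let a pre-trained masked LM fill the [MASK] slot; a verbalizer maps
# predicted label words to class names. Template and verbalizer are
# illustrative assumptions, not the paper's actual configuration.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def classify(sentence: str) -> str:
    prompt = f"{sentence} It was [MASK]."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the [MASK] token and read its vocabulary distribution.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    mask_logits = logits[0, mask_pos].squeeze(0)
    # Verbalizer: score each label by the logit of its label word.
    verbalizer = {"great": "positive", "terrible": "negative"}
    scores = {label: mask_logits[tokenizer.convert_tokens_to_ids(word)].item()
              for word, label in verbalizer.items()}
    return max(scores, key=scores.get)

print(classify("The plot was gripping and the acting superb."))  # -> "positive"
```

Because the template turns classification into the same fill-in-the-blank task the model was pre-trained on, this setup needs little or no labeled data to produce reasonable predictions, which is the property the paper's few-shot method builds on.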
