Abstract

Meta-learning has recently advanced few-shot text classification, which identifies target classes using knowledge transferred from source classes through a series of small tasks, or episodes. Existing works that build their meta-learners on Prototypical Networks struggle to learn text representations that discriminate between similar classes, which can lead to conflicting label predictions. Overfitting caused by the small number of training instances also remains inadequately addressed, and more efficient episode-sampling procedures could further enhance few-shot training. To address these problems, we first present a contrastive learning framework that learns discriminative text representations via supervised contrastive learning while mitigating overfitting via unsupervised contrastive regularization, and we then build an efficient self-paced episode-sampling approach on top of it that introduces progressively more difficult episodes as training proceeds. Empirical results on eight few-shot text classification datasets show that our model outperforms current state-of-the-art models. Extensive experimental analysis demonstrates that both the supervised contrastive representation learning and the unsupervised contrastive regularization improve few-shot text classification performance, and the episode-sampling analysis shows that our self-paced sampling strategy improves training efficiency.
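
The abstract does not give implementation details, but its two main ingredients can be sketched. The snippet below is a minimal, hypothetical PyTorch illustration, not the paper's exact formulation: supervised_contrastive_loss is a standard SupCon-style loss that pulls same-class text embeddings together and pushes different-class embeddings apart, and self_paced_keep admits an episode only while its (assumed precomputed) difficulty score stays below a threshold that grows over training, so harder episodes enter later. The function names, the linear schedule, and the difficulty score are assumptions made for illustration.

```python
# Hypothetical sketch of the two components named in the abstract.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features, labels, temperature=0.1):
    """SupCon-style loss over a batch of text embeddings (N, d) with class labels (N,)."""
    features = F.normalize(features, dim=1)            # unit-normalize embeddings
    sim = features @ features.t() / temperature        # pairwise cosine similarities
    # Exclude self-similarity on the diagonal.
    self_mask = ~torch.eye(len(labels), dtype=torch.bool, device=features.device)
    exp_sim = torch.exp(sim) * self_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    # Positives: other instances sharing the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & self_mask
    # Average log-probability of positives per anchor, then over the batch.
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()


def self_paced_keep(episode_difficulty, step, total_steps):
    """Self-paced curriculum: accept an episode if its difficulty (assumed in [0, 1])
    is below a threshold that grows linearly with training progress."""
    threshold = step / total_steps
    return episode_difficulty <= threshold
```

In practice this loss would be combined with an unsupervised contrastive regularizer (e.g., contrasting two augmented views of the same text) and the episode sampler would score candidate episodes before each training step; both of those details are assumptions here, since the abstract only names the techniques.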
