Abstract

Text classification is a fundamental task in natural language processing, and continuous advances in deep learning have driven its steady progress. However, current deep-learning text classifiers rely heavily on large amounts of annotated data, which is where few-shot learning becomes valuable. In few-shot text classification, each category provides only a few samples; because text is diverse and samples from different categories can be similar, predictions are biased. We propose an enhanced prototype network with a hybrid loss to address these issues in few-shot text classification. We design instance-level attention and a hybrid loss to improve the prototypes' feature-representation capability. Experimental results show that the proposed model performs best across several few-shot settings.
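The abstract does not spell out the model's formulation, but the core idea of a prototypical classifier with instance-level attention can be sketched. In the sketch below, the attention scheme (softmax over each support instance's similarity to the plain class mean) and all function names are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def prototypes_with_attention(support, labels):
    """Compute one prototype per class as an attention-weighted mean of its
    support embeddings. The weighting (softmax over similarity to the plain
    class mean) is a hypothetical stand-in for instance-level attention."""
    protos = {}
    for c in np.unique(labels):
        emb = support[labels == c]          # (k, d) support embeddings of class c
        mean = emb.mean(axis=0)
        scores = emb @ mean                 # similarity of each instance to the mean
        w = np.exp(scores - scores.max())
        w /= w.sum()                        # softmax attention weights
        protos[c] = (w[:, None] * emb).sum(axis=0)
    return protos

def classify(query, protos):
    """Assign a query embedding to the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda c: np.linalg.norm(query - protos[c]))

# Toy 2-way 2-shot episode with 2-d embeddings
support = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
protos = prototypes_with_attention(support, labels)
pred = classify(np.array([0.95, 0.05]), protos)  # lands near class 0
```

The hybrid loss mentioned in the abstract (presumably combining the classification objective with a term that separates similar samples from different categories) is not reproduced here, since its composition is not stated.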
