Abstract

Few-shot text classification aims to learn a classifier from very few labeled text instances per class. Previous few-shot work in NLP is mainly based on Prototypical Networks, which encode the support-set samples of each class into prototype representations and compute the distance between a query and each class prototype. In this prototype-aggregation process, much useful information in the support set, as well as the discrepancy between samples from different classes, is discarded. In contrast, our model attends to all query-support pairs without such information loss. In this paper, we propose a multi-perspective aggregation-based graph neural network (Frog-GNN) that observes through its eyes (support and query instances) and speaks with its mouth (pairs) for few-shot text classification. We construct a graph from pre-trained pair representations and aggregate information from neighborhoods using instance-level representations for message passing. After iterative interaction among instances, the final relational features of the pairs capture intra-class similarity and inter-class dissimilarity. In addition, with a meta-learning strategy, Frog-GNN generalizes well to unseen classes. Experimental results demonstrate that the proposed model outperforms existing few-shot approaches on both few-shot text classification and relation classification across three benchmark datasets.
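The pair-centric idea described above can be sketched as follows. This is a minimal, hypothetical illustration (the function name, the concatenation-based pairing, and the simplified "pairs sharing a query node" neighborhood are assumptions for clarity, not the paper's exact formulation): every query-support pair becomes a graph node, and one message-passing step lets pairs exchange information instead of collapsing the support set into class prototypes.

```python
import numpy as np

def pair_graph_step(support, query, W):
    """One simplified pair-graph message-passing step.

    support: (S, d) support-instance embeddings (S = N classes * K shots)
    query:   (Q, d) query-instance embeddings
    W:       (2d, h) learnable projection (assumed shape for this sketch)
    Returns (Q*S, h) updated pair features.
    """
    Q, S = len(query), len(support)
    # Build every query-support pair feature by concatenation,
    # so no support instance is averaged away into a prototype.
    pairs = np.concatenate(
        [np.repeat(query, S, axis=0),      # (Q*S, d): each query repeated
         np.tile(support, (Q, 1))], axis=1)  # (Q*S, d): supports cycled
    pairs = pairs.reshape(Q, S, -1)        # (Q, S, 2d)
    # Message passing: each pair aggregates the mean of all pairs that
    # share its query node (a deliberately simplified neighborhood).
    msgs = pairs.mean(axis=1, keepdims=True)   # (Q, 1, 2d)
    updated = np.tanh((pairs + msgs) @ W)      # (Q, S, h)
    return updated.reshape(Q * S, -1)
```

In the paper's actual model, the pair features come from a pre-trained encoder and the aggregation is repeated for several rounds; this sketch only shows why pairwise nodes preserve per-instance information that prototype averaging would lose.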
