Abstract

In recent years, few-shot text classification has received increasing attention, and researchers have proposed various methods that enable models to classify text from only a few labeled instances. Traditional metric-based learning models map text features into a new vector space in which similar points are pulled closer together and dissimilar points are pushed farther apart, but these models focus too heavily on the semantic information of the text itself and ignore the potential relational information between texts. Graph neural networks are well suited to discovering such relational information, so a prototype network based on a distributed graph neural network is proposed to address this problem. The distributed graph neural network uses both distribution-level and instance-level information, converting each into the other during updates so that the two are fully fused; the fused representations are then classified with a prototype network. Experimental results demonstrate that the proposed model improves classification accuracy compared with other few-shot text classification models.
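
A minimal sketch, in PyTorch, of the two ideas the abstract describes: one round of message passing that lets instance-level node features and distribution-level (class-similarity) features update each other, followed by prototype-based classification of query texts by distance to class prototypes. This is not the authors' implementation; the tensor shapes, the specific fusion rule, and all names (dual_level_update, prototype_logits, w_inst, w_dist) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dual_level_update(inst, dist, w_inst, w_dist):
    """One assumed round of message passing between the two levels.

    inst: (N, D) instance-level node features (one node per support/query text)
    dist: (N, C) distribution-level features (soft class-similarity per node)
    """
    # Instance graph: adjacency from feature similarity, used to update `dist`
    adj = F.softmax(inst @ inst.t(), dim=-1)          # (N, N)
    dist = F.softmax(adj @ dist @ w_dist, dim=-1)     # distribution-level update

    # Distribution graph: nodes with similar class distributions exchange features
    adj_d = F.softmax(dist @ dist.t(), dim=-1)        # (N, N)
    inst = torch.relu(adj_d @ inst @ w_inst)          # instance-level update
    return inst, dist

def prototype_logits(support, support_labels, query, n_way):
    """Prototypical-network step: score queries by negative distance to class means."""
    protos = torch.stack([support[support_labels == c].mean(0) for c in range(n_way)])
    return -torch.cdist(query, protos)                # (Q, n_way)

# Toy 3-way 5-shot episode with random stand-ins for text embeddings
n_way, k_shot, n_query, d = 3, 5, 6, 32
support = torch.randn(n_way * k_shot, d)
support_labels = torch.arange(n_way).repeat_interleave(k_shot)
query = torch.randn(n_query, d)

# Initialize distribution-level features uniformly, then fuse the two levels once
feats = torch.cat([support, query], dim=0)
dist_feats = torch.full((feats.size(0), n_way), 1.0 / n_way)
w_inst, w_dist = torch.randn(d, d) * 0.1, torch.randn(n_way, n_way) * 0.1
feats, dist_feats = dual_level_update(feats, dist_feats, w_inst, w_dist)

logits = prototype_logits(feats[: n_way * k_shot], support_labels,
                          feats[n_way * k_shot:], n_way)
print(logits.argmax(dim=1))   # predicted classes for the query texts
```

In practice the node features would come from a text encoder and the update would be repeated for several rounds with learned weights; the sketch only shows how the instance-level and distribution-level representations can feed into one another before the prototype classifier.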
