Abstract

Deep learning techniques have achieved significant advances in text classification. Unfortunately, most of these techniques require a large corpus of annotated data to reach optimal performance. Meta-learning has produced promising results on few-shot learning tasks, showing its potential to advance the field. However, current meta-learning methods are prone to overfitting because of the mismatch between the small number of samples and the complexity of the model. To mitigate this concern, we propose a Prompt-based Graph Convolutional Adversarial (PGCA) meta-learning framework that aims to improve the adaptability of complex models in few-shot scenarios. First, leveraging prompt learning, we generate embedding representations that bridge the gap between pre-training and downstream tasks. Then, we design a meta-knowledge extractor based on a graph convolutional network (GCN) to capture inter-class dependencies through instance-level interactions. We also integrate an adversarial network architecture into the meta-learning framework to increase sample diversity through adversarial training and improve the model's ability to adapt to new tasks. In addition, we mitigate the impact of extreme samples by introducing external knowledge to construct a list of class prototype extensions. Finally, we conduct a series of experiments on four public datasets to demonstrate the effectiveness of the proposed method.
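As an illustration only (not the authors' released code), the sketch below shows one way a single GCN layer could let support-set instance embeddings interact before class prototypes are computed, in the spirit of the graph-based meta-knowledge extractor described above; all class names, shapes, and the cosine-similarity adjacency construction are assumptions for the example.

```python
# Minimal sketch (assumed, not the paper's implementation): a GCN layer that
# refines support-set instance embeddings via instance-level interactions,
# followed by per-class mean pooling to obtain class prototypes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_instances, in_dim) instance embeddings, e.g. from a prompted encoder
        # adj: (num_instances, num_instances) non-negative adjacency/similarity matrix
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        h = adj @ x / deg  # mean aggregation over neighbouring instances
        return F.relu(self.linear(h))


def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor, n_classes: int) -> torch.Tensor:
    # Average the graph-refined support embeddings per class to form prototypes.
    return torch.stack([embeddings[labels == c].mean(dim=0) for c in range(n_classes)])


# Usage: build a cosine-similarity graph over an N-way K-shot support set,
# refine the embeddings with one GCN layer, then take class means as prototypes.
if __name__ == "__main__":
    n_way, k_shot, dim = 4, 5, 768
    support = torch.randn(n_way * k_shot, dim)  # stand-in for encoder outputs
    labels = torch.arange(n_way).repeat_interleave(k_shot)
    normed = F.normalize(support, dim=-1)
    adj = (normed @ normed.T).clamp(min=0.0)  # keep non-negative edge weights
    gcn = GCNLayer(dim, dim)
    refined = gcn(support, adj)
    protos = class_prototypes(refined, labels, n_way)
    print(protos.shape)  # torch.Size([4, 768])
```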
