Abstract

Graph machine learning (Graph ML) models typically require abundant labeled instances to provide sufficient supervision signals, which is often infeasible in real-world scenarios since labeled data for newly emerged concepts (e.g., new categorizations of nodes) on graphs is rather limited. To learn efficiently from a small amount of data on graphs, meta-learning has been investigated in Graph ML. By transferring knowledge learned from previous experiences to new tasks, graph meta-learning approaches have demonstrated promising performance on few-shot graph learning problems. However, most existing efforts assume that all the data from the seen classes is gold-labeled, and these methods may lose their efficacy when the seen data is weakly labeled with severe label noise. As such, we investigate a novel problem of weakly supervised graph meta-learning to improve the robustness of knowledge transfer. To achieve this goal, we propose Meta-GIN (Meta Graph Interpolation Network), a new graph meta-learning framework. Built on a robustness-enhanced episodic training paradigm, Meta-GIN is meta-learned to interpolate node representations from weakly labeled data and extract highly transferable meta-knowledge, which enables the model to quickly adapt to unseen tasks with few labeled instances. Extensive experiments demonstrate the superiority of Meta-GIN over existing graph meta-learning methods on weakly supervised few-shot node classification.
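
The abstract does not give implementation details, but the episodic, interpolation-based training it describes can be sketched roughly as follows. This is a minimal, hypothetical illustration only: it assumes a prototypical-network-style classifier and a mixup-style interpolation of support-node embeddings, and the names NodeEncoder, sample_episode, mixup_embeddings, and episode_loss are illustrative, not the paper's actual Meta-GIN code.

```python
import torch
import torch.nn.functional as F


class NodeEncoder(torch.nn.Module):
    """Toy graph encoder: one mean-aggregation step over a dense adjacency
    followed by a linear projection (a stand-in for any GNN backbone)."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        h = adj @ x  # aggregate neighbor features
        return F.relu(self.lin(h))


def sample_episode(labels, n_way, k_shot, q_query):
    """Sample an N-way K-shot episode from the (possibly noisy) seen classes."""
    classes = torch.randperm(int(labels.max()) + 1)[:n_way]
    support, query = [], []
    for c in classes:
        idx = (labels == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(len(idx))]
        support.append(idx[:k_shot])
        query.append(idx[k_shot:k_shot + q_query])
    return classes, torch.cat(support), torch.cat(query)


def mixup_embeddings(z, y_soft, alpha=0.5):
    """Interpolate support embeddings and their soft labels; interpolation is one
    plausible way to dilute the impact of individual noisy labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(z.size(0))
    return lam * z + (1 - lam) * z[perm], lam * y_soft + (1 - lam) * y_soft[perm]


def episode_loss(encoder, x, adj, labels, n_way=5, k_shot=3, q_query=5):
    """One robustness-oriented episode: interpolate noisy support embeddings,
    build soft-label prototypes, and classify query nodes by distance."""
    classes, sup_idx, qry_idx = sample_episode(labels, n_way, k_shot, q_query)
    remap = {int(c): i for i, c in enumerate(classes)}
    z = encoder(x, adj)
    sup_y = F.one_hot(
        torch.tensor([remap[int(labels[i])] for i in sup_idx]), n_way
    ).float()
    qry_y = torch.tensor([remap[int(labels[i])] for i in qry_idx])
    sup_z, sup_y = mixup_embeddings(z[sup_idx], sup_y)
    protos = (sup_y.t() @ sup_z) / sup_y.sum(0).unsqueeze(1).clamp(min=1e-6)
    logits = -torch.cdist(z[qry_idx], protos)  # nearer prototype => higher score
    return F.cross_entropy(logits, qry_y)
```

The actual framework presumably uses a more elaborate interpolation strategy and meta-learner; the sketch only illustrates the overall episodic flow the abstract refers to, i.e., repeatedly sampling few-shot tasks from weakly labeled seen classes and optimizing the encoder across episodes.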
