Abstract

Memory replay, which stores a subset of representative historical data from previous tasks to replay while learning new tasks, exhibits state-of-the-art performance for various continual learning applications on Euclidean data. While topological information plays a critical role in characterizing graph data, existing memory-replay-based graph learning techniques only store individual nodes for replay and do not consider their associated edge information. To this end, we propose a sparsified subgraph memory (SSM), which sparsifies the selected computation subgraphs to a fixed size before storing them in memory. In this way, we reduce the memory consumption of a computation subgraph from $\mathcal{O}(d^{L})$ to $\mathcal{O}(1)$, where $d$ is the average node degree and $L$ is the number of GNN layers, and for the first time enable GNNs to utilize explicit topological information for memory replay. Our empirical studies show that SSM outperforms state-of-the-art approaches by up to 27.8% on four different public datasets. Unlike existing methods, which focus on the task incremental learning (task-IL) setting, SSM succeeds in the challenging class incremental learning (class-IL) setting, in which a model is required to distinguish all learned classes without task indicators, and even achieves performance comparable to joint training, which is the performance upper bound for continual learning. Our code is available at https://github.com/QueuQ/SSM.
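The abstract does not spell out how the computation subgraphs are sparsified, so the following is only a minimal sketch of the general idea: bounding the stored neighborhood of a selected replay node to a constant size by sampling a fixed number of neighbors per hop, rather than keeping the full $L$-hop neighborhood of size $\mathcal{O}(d^{L})$. The function name, the adjacency-dict graph representation, and the budget_per_hop parameter are hypothetical choices for illustration, not the paper's actual procedure or API.

```python
import random

def sparsify_computation_subgraph(adj, center, num_hops=2, budget_per_hop=5, seed=0):
    """Illustrative sketch (not the paper's method): bound the stored
    computation subgraph of `center` to a fixed size by keeping at most
    `budget_per_hop` sampled neighbors per hop, instead of the full
    L-hop neighborhood whose size grows as O(d^L).

    adj: dict mapping node id -> list of neighbor ids.
    Returns (kept_nodes, kept_edges) to be stored in the replay memory.
    """
    rng = random.Random(seed)
    kept_nodes = {center}
    kept_edges = set()
    frontier = [center]
    for _ in range(num_hops):
        next_frontier = []
        for u in frontier:
            # Only consider neighbors not already kept, then sample a fixed budget.
            candidates = [v for v in adj.get(u, []) if v not in kept_nodes]
            sampled = rng.sample(candidates, min(budget_per_hop, len(candidates)))
            for v in sampled:
                kept_nodes.add(v)
                kept_edges.add((u, v))
                next_frontier.append(v)
        frontier = next_frontier
    return kept_nodes, kept_edges

# Toy example: node 0 is a selected replay node with degree 7,
# but the stored subgraph is capped at 1 + 3 + 3*3 nodes regardless of degree.
adj = {0: [1, 2, 3, 4, 5, 6, 7], 1: [0, 8, 9], 2: [0, 10], 3: [0], 4: [0],
       5: [0], 6: [0], 7: [0], 8: [1], 9: [1], 10: [2]}
nodes, edges = sparsify_computation_subgraph(adj, center=0, budget_per_hop=3)
print(len(nodes), len(edges))
```

Under this kind of scheme, the memory cost per stored node depends only on the per-hop budget and the number of hops, both constants, which is what makes the claimed $\mathcal{O}(1)$ storage per replayed subgraph plausible while still retaining explicit edge information for replay.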
