Abstract

Self-supervised learning aims to eliminate the need for expensive annotation in graph representation learning, where graph contrastive learning (GCL) is trained with self-supervision signals derived from data-data pairs. These pairs are generated by applying stochastic augmentation functions to the original graph. We argue that some features can be more critical than others depending on the downstream task, and that applying stochastic augmentation uniformly corrupts these influential features, leading to diminished accuracy. To address this issue, we introduce Feature-Based Adaptive Augmentation (FebAA), which identifies and preserves potentially influential features and corrupts the remaining ones. We implement FebAA as a plug-and-play layer and use it with two state-of-the-art methods, Deep Graph Contrastive Representation Learning (GRACE) and Large-Scale Representation Learning on Graphs via Bootstrapping (BGRL). FebAA improves the accuracy of GRACE and BGRL on eight graph representation learning benchmark datasets.
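
To make the idea concrete, below is a minimal sketch of feature-based adaptive masking, not the authors' FebAA implementation: the importance heuristic (per-column variance), the `keep_ratio` and `drop_prob` parameters, and the column-wise Bernoulli mask are all assumptions for illustration only. It shows the general pattern of leaving the highest-scoring feature columns intact while stochastically corrupting the rest when generating an augmented view.

```python
import torch


def feature_adaptive_mask(x: torch.Tensor, keep_ratio: float = 0.3,
                          drop_prob: float = 0.3) -> torch.Tensor:
    """Sketch of feature-based adaptive augmentation (illustrative only).

    x: node feature matrix of shape [num_nodes, num_features].
    Feature columns judged 'influential' by a simple variance heuristic
    (an assumed proxy, not the paper's criterion) are preserved; the
    remaining columns are zeroed with probability drop_prob.
    """
    num_features = x.size(1)

    # Assumed importance score per feature column.
    importance = x.var(dim=0)
    k = max(1, int(keep_ratio * num_features))
    keep_idx = importance.topk(k).indices

    # Bernoulli mask over feature columns: 1 = keep, 0 = drop.
    mask = (torch.rand(num_features) > drop_prob).float()
    mask[keep_idx] = 1.0  # never corrupt the influential features

    # Broadcast the column mask over all nodes to produce one view.
    return x * mask


if __name__ == "__main__":
    x = torch.randn(100, 16)           # 100 nodes, 16-dim features
    view = feature_adaptive_mask(x)    # one augmented view for contrastive training
    print(view.shape)
```

In a GCL pipeline such as GRACE or BGRL, a layer like this would replace the uniform feature-masking step used to produce each augmented view, so that only non-influential features are perturbed.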
