Abstract
Contrastive learning (CL) has emerged as a powerful approach for self-supervised learning, but it suffers from sampling bias, which hinders its performance. Although the mainstream remedies, hard negative mining (HNM) and supervised CL (SCL), mitigate this issue in other domains, neither addresses graph CL (GCL) effectively. To close this gap, we propose graph positive sampling (GPS) together with three contrastive objectives. GPS is a novel learning paradigm that leverages the inherent properties of graphs to improve GCL models: it combines four complementary similarity measurements, namely node centrality, topological distance, neighborhood overlap, and semantic distance, to select positive counterparts for each node. Notably, GPS operates without relying on ground-truth labels and can be applied as a preprocessing step. The three contrastive objectives fuse the positive samples and enhance representative selection in the semantic space. We release three node-level models equipped with GPS and conduct extensive experiments on public datasets. The results demonstrate the superiority of GPS over state-of-the-art (SOTA) baselines and debiasing methods, and further show that GPS is versatile, adaptive, and flexible.
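To make the sampling paradigm concrete, below is a minimal sketch of how the four similarity measurements named in the abstract could be fused to pick positives for an anchor node. The abstract does not specify the paper's actual scoring or fusion rule, so the equal-weight sum, the top-k selection, the `gps_positives` function name, and the use of `networkx` with random stand-in features are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import networkx as nx

def gps_positives(G, feats, anchor, k=3):
    """Hypothetical GPS-style positive selection: rank candidates by
    four complementary similarities and return the top-k positives."""
    cent = nx.degree_centrality(G)                    # node centrality
    dist = nx.shortest_path_length(G, source=anchor)  # topological distance
    n_anchor = set(G.neighbors(anchor))
    scores = {}
    for v in G.nodes:
        if v == anchor:
            continue
        # 1) centrality similarity: closer centrality -> higher score
        s_cent = 1.0 - abs(cent[anchor] - cent[v])
        # 2) topological distance: nearer nodes score higher
        s_topo = 1.0 / (1.0 + dist.get(v, len(G)))
        # 3) neighborhood overlap: Jaccard index of neighbor sets
        n_v = set(G.neighbors(v))
        union = n_anchor | n_v
        s_nbr = len(n_anchor & n_v) / len(union) if union else 0.0
        # 4) semantic distance: cosine similarity of node features
        s_sem = float(feats[anchor] @ feats[v] /
                      (np.linalg.norm(feats[anchor]) *
                       np.linalg.norm(feats[v]) + 1e-8))
        # unweighted fusion of the four measurements (an assumption)
        scores[v] = s_cent + s_topo + s_nbr + s_sem
    return sorted(scores, key=scores.get, reverse=True)[:k]

# usage: select positives for node 0 on a toy graph with random features
G = nx.karate_club_graph()
feats = np.random.default_rng(0).normal(size=(G.number_of_nodes(), 16))
print(gps_positives(G, feats, anchor=0))
```

Because the scores depend only on graph structure and node features, not on class labels, a routine like this could run once before training, which is consistent with the abstract's claim that GPS works label-free and supports preprocessing.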