Abstract

Graph positive-unlabeled (GPU) learning aims to learn binary classifiers from only positive and unlabeled (PU) nodes. State-of-the-art methods rely on a provided class prior probability, and their performance lags far behind that of fully labeled counterparts. To bridge this gap, we propose Bootstrap Latent Prototypes (BLP), a framework that consists of a graph representation learning module and a two-step algorithm. The learning module bootstraps previous versions of node representations to serve as targets and learns enhanced representations by predicting the latent prototypes of the P set and of each individual node in the U set. It eliminates the requirement for a class prior while capturing positive-similarity information as well as low-level semantic similarity and uniformity information, thereby producing closely aligned and well-discriminated representations for positive nodes. The algorithm module uses the obtained representations to select reliable negative nodes and then trains a binary classifier on the labeled positives and the selected reliable negatives. Experimental results on diverse real-life datasets demonstrate that our proposed BLP method not only outperforms state-of-the-art approaches but also surpasses fully labeled classification models in most cases. The source code is available at https://github.com/lcq411/BLP.
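The two-step algorithm described above can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the actual BLP selection rule and classifier may differ. Here reliable negatives are taken to be the unlabeled nodes farthest from the positive prototype (the mean positive embedding), and the classifier is a plain logistic regression trained by gradient descent; all function names are hypothetical.

```python
import numpy as np

def select_reliable_negatives(embeddings, pos_idx, num_neg):
    """Step 1 (sketch): rank unlabeled nodes by distance to the positive
    prototype and take the farthest `num_neg` as reliable negatives.
    This is an illustrative heuristic, not the exact BLP selection rule."""
    prototype = embeddings[pos_idx].mean(axis=0)
    unlabeled = np.setdiff1d(np.arange(len(embeddings)), pos_idx)
    dists = np.linalg.norm(embeddings[unlabeled] - prototype, axis=1)
    return unlabeled[np.argsort(dists)[-num_neg:]]

def train_binary_classifier(embeddings, pos_idx, neg_idx, lr=0.1, epochs=200):
    """Step 2 (sketch): train a logistic-regression classifier on the
    labeled positives plus the selected reliable negatives."""
    X = np.vstack([embeddings[pos_idx], embeddings[neg_idx]])
    y = np.concatenate([np.ones(len(pos_idx)), np.zeros(len(neg_idx))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        g = p - y                                 # logistic-loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b
```

A classifier trained this way scores an unseen node via `sigmoid(w @ x + b)`, with scores above 0.5 read as positive.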
