Abstract

Self-supervised multiplex graph representation learning (SMGRL) has recently attracted much interest due to its capacity for analyzing multiplex graph data. However, existing SMGRL methods are still limited by the following issues: (i) they generally ignore the noisy information within each graph and the common information shared among different graphs, which weakens their effectiveness, and (ii) they rely on negative-sample encoding and complex pretext tasks for contrastive learning, which weakens their efficiency. To address these issues, in this work we propose a new framework for effective and efficient SMGRL. Specifically, the proposed method employs an intra-graph decorrelation loss to reduce the impact of noisy information within each graph and an inter-graph decorrelation loss to capture the common information among different graphs, thereby achieving effectiveness. Moreover, the proposed method requires no negative samples and uses a simple pretext task, thereby achieving efficiency. We further show theoretically that our method maximizes mutual information without directly conducting contrastive learning, and that it minimizes the multiplex graph information bottleneck, which guarantees its effectiveness. In addition, we propose an extension to semi-supervised scenarios, fitting the practical case in which only a few labels are available. Extensive experimental results verify the effectiveness and efficiency of the proposed method on various downstream tasks.
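
The abstract names intra-graph and inter-graph decorrelation losses but gives no formulas. Below is a minimal sketch of what a decorrelation objective of this kind typically looks like, assuming a Barlow-Twins-style cross-correlation form; the function name `decorrelation_loss`, the trade-off weight `lam`, and the standardization step are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def decorrelation_loss(z_a, z_b, lam=0.005):
    """Hypothetical decorrelation objective between two embedding matrices.

    z_a, z_b: (n_nodes, d) node embeddings, e.g. from two different graphs
    of the multiplex (inter-graph) or from one graph and an augmented view
    of it (intra-graph). This cross-correlation form is an assumption; the
    abstract only names the losses.
    """
    # Standardize each embedding dimension over the batch of nodes.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)

    n = z_a.shape[0]
    c = z_a.T @ z_b / n  # (d, d) cross-correlation matrix

    # Diagonal pulled toward 1: paired dimensions stay aligned
    # (preserving shared information between the two views).
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    # Off-diagonal pushed toward 0: distinct dimensions are decorrelated
    # (suppressing redundant or noisy components).
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag
```

Under this reading, the inter-graph variant would apply the loss to embeddings of the same nodes produced from two different graphs, aligning their common information, while the intra-graph variant would penalize redundancy among the embedding dimensions within a single graph. Notably, an objective of this shape needs no negative samples, consistent with the efficiency claim above.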
