Abstract
Graph contrastive learning is an effective method for enhancing graph representations by maximizing the similarity of representations between analogous graphs while minimizing the similarity between dissimilar ones. A common approach to improving the representational capacity of graphs is data augmentation, which generates additional training samples by applying transformations to the original graphs. However, traditional augmentation techniques, such as the random deletion of nodes or edges, often compromise the structural integrity of graphs, discard crucial information, and destabilize the learned representations. To overcome these drawbacks, this study introduces a novel data augmentation paradigm, the integration of directed random noise (IDR), which injects controlled random noise to achieve the augmentation objectives. IDR enhances the diversity and robustness of representations without sacrificing the structural integrity of the graphs, significantly improving the performance of graph contrastive learning while avoiding the structural damage and information loss associated with conventional methods. To further refine the model, this paper proposes an improved multiscale contrastive Siamese network framework that employs three Siamese networks to process different views of the input graphs. It uses cross-network and cross-view contrastive learning objectives to optimize graph representations, exploiting the complementary information between views to maximize the consistency of representations among similar graphs and thereby improving the quality and generalizability of the learned representations. In addition, a self-supervised loss function based on graph reconstruction is introduced into the loss framework; it exploits the structural similarity between the original and reconstructed graphs to further refine the graph representations. 
This loss function ensures that the global view retains more structural information from the original graph, thereby enhancing the complementarity between the global and local views. The effectiveness and stability of the proposed framework are demonstrated through node classification tasks and visualizations on six real-world datasets, where it compares favorably against existing methods such as MERIT and GraphVAT. The proposed model achieves accuracy improvements of 3.21% and 3.71% on the Cora dataset and 1.12% and 1.42% on the CiteSeer dataset over these two baselines, and it ranks first in average accuracy across the six datasets, with scores of 80.6% and 52.67%. These results underscore the robustness and efficacy of the proposed model in self-supervised graph representation learning, offering substantial advances over existing techniques.
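The core idea of noise-based augmentation paired with a view-consistency objective can be illustrated with a minimal sketch. This is not the paper's actual IDR procedure or loss (neither is specified in the abstract); the Gaussian noise model, the noise scale `sigma`, and the cosine-similarity contrastive loss below are all illustrative assumptions.

```python
# Hypothetical sketch: noise-injection augmentation plus a cosine-similarity
# consistency loss. The noise model and loss are assumptions for illustration,
# not the paper's IDR method.
import numpy as np

def noise_augment(x, sigma=0.1, rng=None):
    """Perturb node features with scaled Gaussian noise. Unlike random
    node/edge deletion, this leaves the graph topology fully intact."""
    rng = rng or np.random.default_rng(0)
    return x + sigma * rng.standard_normal(x.shape)

def consistency_loss(z1, z2, eps=1e-8):
    """Negative mean cosine similarity between two views' node embeddings;
    lower loss means the two augmented views agree more closely."""
    z1 = z1 / (np.linalg.norm(z1, axis=1, keepdims=True) + eps)
    z2 = z2 / (np.linalg.norm(z2, axis=1, keepdims=True) + eps)
    return -np.mean(np.sum(z1 * z2, axis=1))

x = np.ones((4, 8))  # toy node-feature matrix: 4 nodes, 8 features
v1 = noise_augment(x, sigma=0.1, rng=np.random.default_rng(1))
v2 = noise_augment(x, sigma=0.1, rng=np.random.default_rng(2))
loss = consistency_loss(v1, v2)  # near -1 when the views stay close
```

Because the noise is small relative to the feature norms, the two views remain nearly collinear and the loss sits close to its minimum of -1; a training loop would minimize such a loss across views produced by the Siamese branches.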