Abstract
Complex networks contain large amounts of unlabeled data. Extracting the information in these data and obtaining appropriate node representations poses a significant challenge. Recently, contrastive learning has gained wide acceptance as one of the most important methods in self-supervised learning. It has shown an exceptional ability to capture both the attribute and structural information of a network, providing a new approach to network representation. In this paper, we propose Enhanced Contrastive Representation in Network (ECRN). First, we identify the issue that contrastive learning ignores the network structure when calculating the similarity of the joint distribution; to address this, ECRN introduces a feature combination method that incorporates additional structural information. Second, we preserve the connection relationships of nodes in the low-dimensional embedding by constraining each layer of the neural network. Third, we propose a new graph neural network framework that compares each layer independently, realizing hierarchical contrastive learning. ECRN obtains information-rich node representations by accumulating the representations of each layer, enhancing the performance of downstream tasks. We evaluated ECRN on real-world datasets and compared it with well-known algorithms. Experiments show that ECRN outperforms state-of-the-art methods on link prediction, clustering, and node classification tasks.
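The abstract describes producing per-layer embeddings and accumulating them into the final node representation. The following is a minimal illustrative sketch of that idea only, not ECRN's actual architecture: it uses a toy symmetrically normalized GNN propagation, keeps each layer's output (so layers could be contrasted independently), and sums them. All function names, the choice of nonlinearity, and the summation as the accumulation rule are assumptions for illustration.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def layerwise_embeddings(A, X, weights):
    # Propagate features layer by layer and keep every layer's output,
    # so each layer can later be contrasted independently (hierarchical
    # contrastive learning) rather than only using the final layer.
    A_norm = normalize_adj(A)
    H = X
    layers = []
    for W in weights:
        H = np.tanh(A_norm @ H @ W)  # one toy GNN layer (tanh chosen arbitrarily)
        layers.append(H)
    return layers

def accumulate(layers):
    # One simple way to "accumulate representations in each layer":
    # element-wise sum of the per-layer embeddings.
    return np.sum(layers, axis=0)

# Toy 4-node graph with random features and two hypothetical weight matrices.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))
weights = [rng.normal(size=(8, 8)) * 0.1 for _ in range(2)]

layers = layerwise_embeddings(A, X, weights)
Z = accumulate(layers)
print(Z.shape)  # (4, 8): one accumulated embedding per node
```

In a real model the weight matrices would be trained with a per-layer contrastive loss; here they are random, since the abstract does not specify the loss or the layer widths.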