Abstract

Graphs are a common and important data structure, and networks such as the Internet and social networks can be represented as graphs. The introduction of the Graph Convolutional Network (GCN) brought graph research into the deep learning era and has achieved better results than traditional methods on various tasks. For ordinary neural networks, adding layers often improves results. For GCNs, however, increasing depth causes a catastrophic decline in performance: node features gradually become indistinguishable, gradients vanish, and weights can no longer be updated. This phenomenon is called over-smoothing, and it makes training deep GCNs difficult; as a result, shallow GCNs tend to outperform deep ones. We therefore design a contrastive learning model in which the deep GCN is trained to match the shallow GCN's features of the same node (positive samples) while pushing away the features of other nodes (negative samples), so that the deep GCN can recover the behavior of the shallow GCN. Experiments show that our method effectively alleviates over-smoothing. We also apply this model on top of other over-smoothing methods and obtain further improvements.

Keywords: Graph convolutional network, Graph contrastive learning, Over-smoothing
