Abstract

Graph representation learning aims to encode the structural and semantic information of graph objects as dense real-valued vectors in a low-dimensional space via machine learning. It is widely used in node classification, link prediction, and recommendation systems. However, directly computing embeddings on the original graph is prohibitively inefficient, especially for large-scale graphs. To address this issue, we present GSE (Graph Summarization Embedding), a more efficient model that computes node embeddings based on graph summarization. Specifically, the model first searches for the grouping of nodes into $k$ groups that minimizes information entropy, transforming the original graph into a hypergraph that captures higher-order structural features. Next, the connection probabilities of the summarization graph determine biased random walks on the hypergraph, which generate sequences of super-nodes. Finally, these node sequences are fed into a skip-gram model to produce the node vectors. The proposed model improves the efficiency of graph embedding on large-scale graphs and effectively alleviates the local-optimum problem caused by random walks. Experimental results demonstrate that GSE outperforms mainstream clustering baselines such as K-Means Clustering, Affinity Propagation Clustering, Canopy Clustering, and ACP Clustering. Moreover, our model can be coupled with mainstream graph embedding methods and improves Macro-F1 and Micro-F1 scores on classification tasks across a variety of real-world graph datasets.
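
The abstract's three-stage pipeline (summarize, walk, embed) can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: it assumes the entropy-minimizing node grouping is already given (the paper computes it), uses networkx and gensim as stand-in libraries, and the helper names `summarize` and `biased_walks` are invented for this example.

```python
# Hypothetical sketch of a GSE-style pipeline (illustration only):
# (1) collapse a graph into super-nodes, (2) run random walks biased by
# super-edge weights, (3) train a skip-gram model on the walks.
import random

import networkx as nx
from gensim.models import Word2Vec  # skip-gram when sg=1

def summarize(G, groups):
    """Collapse G into a weighted super-graph; `groups` maps node -> group id.
    (The paper selects the k groups by minimizing information entropy;
    here the grouping is assumed to be given.)"""
    H = nx.Graph()
    for u, v in G.edges():
        gu, gv = groups[u], groups[v]
        if gu == gv:
            continue  # intra-group edges vanish inside a super-node
        w = H[gu][gv]["weight"] + 1 if H.has_edge(gu, gv) else 1
        H.add_edge(gu, gv, weight=w)
    return H

def biased_walks(H, num_walks=10, walk_len=20, seed=0):
    """Walks where the next super-node is sampled in proportion to the
    super-edge weight, i.e. the connection-probability bias."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in H.nodes():
            walk = [start]
            while len(walk) < walk_len:
                nbrs = list(H[walk[-1]])
                if not nbrs:
                    break
                weights = [H[walk[-1]][n]["weight"] for n in nbrs]
                walk.append(rng.choices(nbrs, weights=weights)[0])
            walks.append([str(n) for n in walk])
    return walks

# Toy usage: a 6-node cycle summarized into k = 3 super-nodes.
G = nx.cycle_graph(6)
groups = {n: n // 2 for n in G.nodes()}   # nodes 0-1, 2-3, 4-5
H = summarize(G, groups)
walks = biased_walks(H)
model = Word2Vec(walks, vector_size=16, window=3, min_count=0, sg=1)
print(model.wv["0"][:4])                  # embedding of super-node 0
```

Because the walks are generated on the summarized graph rather than the original one, the skip-gram stage trains on far shorter corpora over $k$ super-nodes, which is the source of the efficiency gain the abstract claims.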
