Abstract
Social networks combine complex graph structures with rich textual information. The text carries information essential to many tasks, while the graph structure supplies multilevel context for the text's semantics. Such data are commonly represented as text-attributed graphs (TAGs). Most TAG-based representation learning methods focus on frameworks that convey graph structure to large language models (LLMs), which generate semantic embeddings for downstream graph neural networks (GNNs). However, these methods attach text attributes only to nodes, failing to capture the multilevel context and discarding valuable information. To address this issue, we introduce the Multilevel Context Learner (MCL) model, which exploits multilevel context on social networks to enhance the semantic embedding capability of LLMs. We model the social network as a multilevel context textual-edge graph (MC-TEG), effectively capturing both graph structure and semantic relationships. The MCL model leverages the reasoning capability of LLMs to generate semantic embeddings that integrate these multilevel contexts, and tailored bidirectional dynamic graph attention layers further differentiate the weights assigned to neighboring information. Experimental evaluations on six real social network datasets show that MCL consistently outperforms all baseline models, achieving prediction accuracies of 77.98%, 77.63%, 74.61%, 76.40%, 72.89%, and 73.40%, with absolute improvements of 9.04%, 9.19%, 11.05%, 7.24%, 6.11%, and 9.87% over the next best models. These results demonstrate the effectiveness of the proposed MCL model.
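The abstract does not specify the form of the bidirectional dynamic graph attention layers. The sketch below shows one plausible reading, assuming a GATv2-style dynamic attention (LeakyReLU applied before the attention vector) aggregated over both the original and the reversed edge directions. The class name, dimensions, and sum-based combination of the two directions are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a bidirectional dynamic graph attention layer.
    # Assumes GATv2-style dynamic attention over both edge directions;
    # all names and sizes are illustrative, not the MCL code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BiDynamicGraphAttention(nn.Module):
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            # Separate projections for source and target nodes.
            self.w_src = nn.Linear(in_dim, out_dim, bias=False)
            self.w_dst = nn.Linear(in_dim, out_dim, bias=False)
            self.attn = nn.Linear(out_dim, 1, bias=False)

        def _aggregate(self, x, src, dst):
            h_src, h_dst = self.w_src(x), self.w_dst(x)
            # LeakyReLU *before* the attention vector makes the
            # attention dynamic in the GATv2 sense.
            scores = self.attn(F.leaky_relu(h_src[src] + h_dst[dst]))
            weights = torch.exp(scores - scores.max())  # stable exponentials
            # Per-destination softmax normalization via scatter-add.
            denom = x.new_zeros(x.size(0), 1).index_add_(0, dst, weights)
            alpha = weights / (denom[dst] + 1e-16)
            # Weighted sum of source messages into each destination node.
            out = x.new_zeros(x.size(0), h_src.size(1))
            return out.index_add_(0, dst, alpha * h_src[src])

        def forward(self, x, edge_index):
            src, dst = edge_index
            # Attend over the original edges and the reversed edges,
            # so information flows in both directions.
            return self._aggregate(x, src, dst) + self._aggregate(x, dst, src)

    # Usage: 4 nodes with 8-dim embeddings (e.g., LLM outputs), 3 directed edges.
    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
    layer = BiDynamicGraphAttention(8, 16)
    print(layer(x, edge_index).shape)  # torch.Size([4, 16])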