Abstract

An information dissemination network (i.e., a cascade) with a dynamic graph structure is formed when a novel idea or message spreads from person to person. Predicting the growth of cascades is one of the fundamental problems in social network analysis. Existing deep learning models for cascade prediction are primarily based on recurrent neural networks and on representations of random walks or propagation paths. However, these models are not sufficient for learning the deep spatial and temporal features of an entire cascade. Therefore, a new model, called Cascade2vec, is proposed to learn the dynamic graph representation of cascades based on graph recurrent neural networks. To learn a more effective graph-level representation of cascades, current graph neural networks are improved by designing a graph residual block, which shares attention weights between nodes and transforms features through perception layers. Furthermore, the proposed graph neural network is integrated into a recurrent neural network to learn the temporal features between graphs. With this method, both the spatial and temporal characteristics of cascades are learned in Cascade2vec. The experimental results show that our method significantly reduces the mean squared logarithmic error and median squared logarithmic error by 16.1% and 12%, respectively, for cascade prediction at one hour on the Microblog network dataset compared with strong baselines.
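The abstract describes a graph residual block with attention shared between nodes, perception (MLP) layers, and a recurrent layer over the sequence of cascade snapshots. The PyTorch sketch below illustrates one possible reading of that architecture; the class names, the dense-adjacency formulation, the mean readout, and all hyper-parameters are assumptions for illustration, not the authors' reference implementation.

```python
# A minimal, illustrative sketch of the Cascade2vec idea in PyTorch.
# Names (GraphResidualBlock, Cascade2vecSketch) and all design details are
# assumptions; the paper's actual layer definitions may differ.
import torch
import torch.nn as nn


class GraphResidualBlock(nn.Module):
    """Attention-weighted neighbor aggregation followed by a small
    perception (MLP) layer, wrapped in a residual connection."""

    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)           # scores one edge (i, j)
        self.perception = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x, adj):
        # x: (n, dim) node features; adj: (n, n) {0,1} adjacency of one snapshot
        n = x.size(0)
        pairs = torch.cat(
            [x.unsqueeze(1).expand(n, n, -1), x.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        scores = self.attn(pairs).squeeze(-1)        # (n, n) raw attention scores
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)        # one weight per edge, shared by all feature dims
        alpha = torch.nan_to_num(alpha)              # isolated nodes -> zero weights
        h = alpha @ x                                # aggregate neighbor features
        return x + self.perception(h)                # residual connection


class Cascade2vecSketch(nn.Module):
    """GNN over each cascade snapshot + GRU over time + size regressor."""

    def __init__(self, dim, num_blocks=2):
        super().__init__()
        self.blocks = nn.ModuleList(GraphResidualBlock(dim) for _ in range(num_blocks))
        self.rnn = nn.GRUCell(dim, dim)
        self.out = nn.Linear(dim, 1)                 # predicts (log) incremental cascade size

    def forward(self, snapshots):
        # snapshots: list of (node_features, adjacency) pairs, ordered in time
        state = torch.zeros(1, self.out.in_features)
        for x, adj in snapshots:
            for block in self.blocks:
                x = block(x, adj)
            graph_repr = x.mean(dim=0, keepdim=True)  # graph-level readout
            state = self.rnn(graph_repr, state)       # temporal features across snapshots
        return self.out(state).squeeze(-1)
```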

Highlights

  • Online social network platforms facilitate the dissemination of novel ideas, news, and messages [1]–[3]

  • We extend current graph neural networks (GNNs), motivated by ideas from ResNet [25] and the Transformer [26], and propose a new graph neural network model, called the Graph Perception Network (GPN)

  • The results show that the proposed method GPN achieves a 12.4% error reduction in terms of the mean squared logarithmic error (MSLE) compared to the strong baseline DeepCas, while the Graph Convolutional Network (GCN) performs comparably to DeepCas (the MSLE metric is sketched below)
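The highlights report results in terms of the mean and median squared logarithmic error. The snippet below shows these metrics as they are commonly defined for cascade-size prediction; the logarithm base and the +1 offset are assumptions here rather than the paper's exact definition.

```python
# Mean and median squared logarithmic error for cascade size prediction.
# The log base (2) and the +1 offset are assumptions; check the paper for
# the exact formulation it evaluates.
import numpy as np

def squared_log_errors(y_pred, y_true):
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return (np.log2(y_pred + 1) - np.log2(y_true + 1)) ** 2

def msle(y_pred, y_true):
    return squared_log_errors(y_pred, y_true).mean()

def median_sle(y_pred, y_true):
    return np.median(squared_log_errors(y_pred, y_true))
```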


Summary

INTRODUCTION

Online social network platforms facilitate the dissemination of novel ideas, news, and messages [1]–[3]. Based on a graph recurrent neural network model, Cascade2vec is introduced to learn both the spatial and temporal features of cascades. Instead of learning an embedding feature for each user as in DeepCas [10] and DeepHawkes [11], Cascade2vec models cascades as dynamic graphs and incorporates the users' representation features into the representation learned by the graph neural networks. Our contributions in this paper are as follows: to improve the graph-level representation of cascades, we propose a more powerful graph neural network that outperforms current GNNs in graph-level classification and regression tasks. Our code and processed data will be available at https://github.com/zhenhuascut/Cascade2vec
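Since the introduction frames a cascade as a dynamic graph observed over time, the sketch below illustrates one plausible way to turn a cascade's diffusion records into a sequence of cumulative graph snapshots. The (source, target, seconds-since-root) record format, the one-hour window, and the snapshot count are assumptions for illustration, not the paper's preprocessing pipeline.

```python
# Illustrative preprocessing: build cumulative cascade snapshots within the
# observation window, to be fed to a recurrent graph neural network.
import networkx as nx

def cascade_to_snapshots(events, observation_window=3600, num_snapshots=6):
    """events: iterable of (source_user, target_user, seconds_since_root)."""
    step = observation_window / num_snapshots
    events = sorted(
        (e for e in events if e[2] <= observation_window), key=lambda e: e[2]
    )
    snapshots, g, idx = [], nx.DiGraph(), 0
    for k in range(1, num_snapshots + 1):
        cutoff = k * step
        while idx < len(events) and events[idx][2] <= cutoff:
            src, dst, _ = events[idx]
            g.add_edge(src, dst)          # diffusion edge: src forwarded to dst
            idx += 1
        snapshots.append(g.copy())        # cumulative graph up to this cutoff
    return snapshots
```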

