Abstract

Predicting information diffusion cascades is an essential task in social networks; we focus on predicting cascade size. The relationships inside a cascade are diverse, including global and relative spatio-temporal relationships as well as interpersonal influence relationships. These complex inter-node relationships play a crucial role in cascade prediction, but they have not been thoroughly investigated. The Transformer's global receptive field can help capture the relationship between any two nodes; however, applying a Transformer directly to a cascade is insufficient without accounting for the cascade's temporal and structural characteristics. In this paper, we propose CasTformer, a novel cascade Transformer designed specifically for cascade size prediction. CasTformer applies a global spatio-temporal positional encoding and relative-relationship bias matrices to the self-attention mechanism to capture diverse cascade relationships. Moreover, self-knowledge distillation is employed to obtain a better cascade representation and enhance prediction performance. We validate our model on four datasets containing nearly a million cascade samples, and training completes in 3 hours. Experimental results show that CasTformer outperforms state-of-the-art methods by an average of 11.9%, 6.1%, and 9.6% on MSLE, MAPE, and R², respectively.
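The abstract does not give the exact formulation, but the relative-relationship bias idea can be illustrated with a generic self-attention sketch. The code below is a minimal assumption-based illustration, not the authors' implementation: it adds a pairwise bias matrix to the attention logits, in the style of relative position biases. All names (`biased_self_attention`, `rel_bias`) and shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def biased_self_attention(x, w_q, w_k, w_v, rel_bias):
    # x: (n, d) node embeddings for one cascade; rel_bias: (n, n) pairwise
    # bias encoding, e.g., relative time gaps or structural distances
    # (hypothetical; the paper's actual bias construction is not shown here).
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d = q.shape[-1]
    # Scaled dot-product logits plus an additive per-pair bias, so attention
    # weights reflect both content similarity and relative relationships.
    logits = (q @ k.transpose(-2, -1)) / d ** 0.5 + rel_bias
    attn = F.softmax(logits, dim=-1)
    return attn @ v

# Toy usage: 5 cascade nodes with 16-dim embeddings.
n, d = 5, 16
x = torch.randn(n, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
rel_bias = torch.randn(n, n)  # stand-in for a learned/derived bias matrix
out = biased_self_attention(x, w_q, w_k, w_v, rel_bias)
print(out.shape)  # torch.Size([5, 16])
```

Under this reading, the bias matrix lets every node attend to every other node (the Transformer's global receptive field) while still injecting cascade-specific temporal and structural signals into the attention weights.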
