Abstract

Rumors are one of the main problems on social media and often have deep and rapidly spreading harmful effects on society. Although many rumor detection models consider content features and social features, all of them rely on the word independence assumption and therefore ignore sequence context. As a result, a post that merely contains words frequently appearing in rumors may be misclassified as a rumor. To solve this problem, we propose a deep sequence context model (DSCM) for Chinese microblog rumor detection. The model considers two key factors of rumors: falsity and influence. First, to learn falsity, we abandon the word independence assumption and use long short-term memory (LSTM) units to capture bidirectional sequence context in the content. Second, to learn influence, we combine the deep sequence context information with social features to learn the connection between content and social features. Our experiments show that our approach outperforms several state-of-the-art machine learning approaches in rumor detection, including term frequency and inverse document frequency (TFIDF), LSTM, and gated recurrent unit (GRU) models.
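
The abstract describes an architecture that encodes post content with a bidirectional LSTM and fuses the resulting representation with social features before classification. The following is a minimal sketch of that general idea, not the authors' exact DSCM implementation; all layer sizes, feature dimensions, and names are illustrative assumptions.

```python
# Sketch only: a bidirectional LSTM encodes the post text, and its summary
# vector is concatenated with a social-feature vector for rumor classification.
# Dimensions and names are assumptions, not taken from the paper.
import torch
import torch.nn as nn


class BiLSTMRumorClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, social_dim=10):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bidirectional LSTM captures left-to-right and right-to-left context.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Content representation (2 * hidden_dim) is fused with social features.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim + social_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # rumor vs. non-rumor
        )

    def forward(self, token_ids, social_features):
        embedded = self.embedding(token_ids)                 # (B, T, E)
        _, (h_n, _) = self.lstm(embedded)                    # h_n: (2, B, H)
        content_repr = torch.cat([h_n[0], h_n[1]], dim=-1)   # (B, 2H)
        fused = torch.cat([content_repr, social_features], dim=-1)
        return self.classifier(fused)                        # logits (B, 2)


# Example forward pass with dummy data.
model = BiLSTMRumorClassifier(vocab_size=5000)
tokens = torch.randint(1, 5000, (4, 30))   # 4 posts, 30 tokens each
social = torch.rand(4, 10)                 # e.g. follower count, repost count
logits = model(tokens, social)
print(logits.shape)  # torch.Size([4, 2])
```

Because the LSTM reads the token sequence in both directions, a word's contribution depends on its surrounding context rather than on its presence alone, which is the abstract's stated remedy for the word independence assumption.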
