Abstract

The ubiquitous use of social media such as microblogging platforms creates unprecedented opportunities for false information to spread online. Facing the challenges of this so-called "post-fact" era, intelligent systems must not only check the veracity of information but also verify the authenticity of the users who spread it, especially in time-critical situations such as real-world emergencies, where urgent measures must be taken to stop the spread of false information. In this work, we propose a novel machine-learning-based approach for automatically identifying users who spread rumorous information on Twitter by leveraging computational trust measures, in particular the concept of believability. We define believability as a measure of the extent to which propagated information is likely to be perceived as truthful, based on trust proxies such as users' retweet and reply behaviors in the network. We hypothesize that the believability between two users is proportional to the trustingness of the retweeter/replier and the trustworthiness of the tweeter; these two scores are complementary representations of user trust and can be inferred from trust proxies using a variant of the HITS algorithm. With the trust network edge-weighted by believability scores, we apply network representation learning algorithms to generate user embeddings, which are then fed to recurrent neural networks (RNN) to classify users as rumor spreaders or not. Experiments on a large real-world rumor dataset collected from Twitter demonstrate that our proposed RNN-based method effectively identifies rumor spreaders and outperforms four simpler, non-RNN baselines by a large margin.
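To illustrate the idea, the HITS-style mutual reinforcement behind believability can be sketched as follows: trustingness plays a hub-like role, trustworthiness an authority-like role, and the believability of an edge is taken proportional to their product. This is a minimal Python sketch under assumed update rules and normalization; the function names, edge representation, and iteration count are illustrative, not the paper's implementation.

# Minimal sketch (not the authors' code): HITS-style mutual reinforcement of
# trustingness (hub-like) and trustworthiness (authority-like) over a directed
# trust-proxy graph, where an edge (src, dst) means src retweeted or replied to dst.

def trust_scores(edges, iterations=50):
    users = {u for edge in edges for u in edge}
    trustingness = {u: 1.0 for u in users}
    trustworthiness = {u: 1.0 for u in users}
    for _ in range(iterations):
        # Trustworthiness of dst accumulates the trustingness of users who retweet/reply to dst.
        trustworthiness = {u: 0.0 for u in users}
        for src, dst in edges:
            trustworthiness[dst] += trustingness[src]
        # Trustingness of src accumulates the trustworthiness of users src retweets/replies to.
        new_trustingness = {u: 0.0 for u in users}
        for src, dst in edges:
            new_trustingness[src] += trustworthiness[dst]
        # L1-normalize so the two score vectors stay comparable across iterations.
        def normalize(scores):
            total = sum(scores.values()) or 1.0
            return {k: v / total for k, v in scores.items()}
        trustingness = normalize(new_trustingness)
        trustworthiness = normalize(trustworthiness)
    return trustingness, trustworthiness

def believability(src, dst, trustingness, trustworthiness):
    # Assumed edge weight: proportional to the retweeter/replier's trustingness
    # and the original tweeter's trustworthiness.
    return trustingness[src] * trustworthiness[dst]

# Example: u retweets v and w; x replies to v.
edges = [("u", "v"), ("u", "w"), ("x", "v")]
ti, tw = trust_scores(edges)
weight_uv = believability("u", "v", ti, tw)

In such a sketch, the believability scores would serve as the edge weights of the trust network on which node embeddings are subsequently learned.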
