Abstract

To combat false information, social media sites have relied heavily on content moderation, mostly performed by human workers. However, human content moderation entails multiple problems, including high labor costs, limited effectiveness, and ethical issues. To overcome these concerns, social media companies are investing aggressively in the development of artificial intelligence-powered false information detection systems. Extant efforts, however, have failed to capture the nature of human argumentation, delegating the inference of truth to the black box of neural networks. They neglect important aspects of how humans judge the veracity of an argument, which creates significant challenges. To this end, building on Toulmin’s model of argumentation, we propose a computational framework that helps machine learning models for false information identification understand the connection between a claim (whose veracity needs to be verified) and evidence (which contains information to support or refute the claim). Two experiments testing model performance and explainability reveal that our framework outperforms cutting-edge machine learning methods and has positive effects on human task performance, trust in algorithms, and confidence in decision making. Our results shed new light on the growing field of automated false information identification.
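To make the claim–evidence setup concrete, the following is a minimal sketch of how claim–evidence verification is commonly framed as natural-language inference, where evidence serves as the premise and the claim as the hypothesis. This is a generic stand-in, not the paper's Toulmin-based framework; the Hugging Face transformers library and the public roberta-large-mnli checkpoint are assumptions made purely for illustration.

```python
from transformers import pipeline

# Illustrative stand-in, not the authors' Toulmin-based framework:
# claim-evidence verification framed as natural language inference (NLI).
# The evidence acts as the premise and the claim as the hypothesis;
# ENTAILMENT / CONTRADICTION map loosely to "supports" / "refutes".
nli = pipeline("text-classification", model="roberta-large-mnli")

claim = "Vitamin C cures the common cold."  # claim whose veracity needs verification
evidence = "Large clinical trials have found that vitamin C does not cure the common cold."

# The text-classification pipeline accepts a sentence pair as a dict.
result = nli({"text": evidence, "text_pair": claim})
print(result)  # e.g. [{'label': 'CONTRADICTION', 'score': 0.98}]
```

A Toulmin-informed system would go further than this pairwise label, e.g., by modeling the warrant that licenses the step from evidence to claim, which is the gap the abstract says black-box classifiers leave open.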
