Abstract

After misinformation became widespread in 2020, the research community began prioritizing the development of state-of-the-art (SOTA) fake news detectors. However, these models did little to change user attitudes toward misinformation. We therefore aim to increase trust between users and AI fake news detectors by implementing an explanatory moderator. We started with two research questions: (1) can current fake news detectors designed for short texts perform well on long texts such as full news articles, and (2) can we create a fake news detector that achieves performance comparable to SOTA fake news detectors while presenting its classifications as explainable visualizations. To address the first research question, we chose WELFake, a dataset containing news articles from four different news platforms. To establish a comparable SOTA baseline, we ran preliminary models on WELFake: a majority-class baseline, a random forest classifier with bag-of-words features, and the third-place model from the AAAI 2021 Shared Task: COVID-19 Fake News Detection in English. We then addressed the second research question by manually fine-tuning a BERT model so we could access its attention weights and visualize them through BertViz. Our manually fine-tuned BERT model achieved 99.99% test accuracy, outperforming the comparable SOTA Two-Fold Four-Model ensemble. We conclude that current SOTA fake news detectors designed for short texts can achieve the same level of accuracy on long texts, and that explanatory fake news detectors can be comparable to current SOTA models.
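To illustrate the explainability component described above, the following is a minimal sketch of how attention weights from a fine-tuned BERT classifier can be passed to BertViz. The checkpoint path and example text are hypothetical placeholders, not artifacts from the paper.

from transformers import BertTokenizer, BertForSequenceClassification
from bertviz import head_view

# Hypothetical path to a BERT checkpoint fine-tuned on WELFake (placeholder).
checkpoint = "./welfake-bert"

tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertForSequenceClassification.from_pretrained(checkpoint, output_attentions=True)

# Placeholder article text; long articles are truncated to BERT's 512-token limit.
text = "Breaking: scientists confirm surprising new findings about the election."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

# Forward pass returns one attention tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len).
outputs = model(**inputs)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Render BertViz's interactive head view (intended for a Jupyter notebook).
head_view(outputs.attentions, tokens)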
