In today’s post-truth world, the proliferation of propaganda and fabricated news poses a serious risk of misinforming the public on a wide range of issues, whether through traditional media or social media. The information people acquire from these articles and posts shapes their worldview and informs the choices they make in their daily lives. Fake news can therefore act as a malicious force with massive real-world consequences. In this paper, we focus on classifying fake news using models built on Bidirectional Encoder Representations from Transformers (BERT), a natural language processing framework. We fine-tune BERT on domain-specific datasets and additionally incorporate human-written justifications and metadata to improve model performance. We find that the deep-contextualizing nature of BERT is effective for this task, yielding a significant improvement on binary classification and a small but meaningful improvement on six-label classification compared with previously explored models.
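As a rough illustration of the fine-tuning setup described above (not the authors’ actual code, whose hyperparameters, label scheme, and data are not given here), a minimal sketch using the Hugging Face `transformers` library might look like the following; the example texts, label encoding, and learning rate are all illustrative assumptions.

```python
# Minimal sketch: fine-tuning BERT for binary fake-news classification.
# Hypothetical data and hyperparameters; not the paper's exact configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 2 for binary; 6 for the six-label setup
)

# Toy statements standing in for a fake-news dataset (assumed labels: 0 = fake, 1 = true).
texts = ["The moon landing was staged.", "The senate passed the bill today."]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss computed internally
outputs.loss.backward()
optimizer.step()
```

Metadata or human justifications, as mentioned in the abstract, could be folded in by concatenating them to the input text or by extending the classification head, depending on the design chosen.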