Abstract

Indiscriminate elimination of harmful fake news risks destroying satirical news, which can be benign or even beneficial, because the two share highly similar textual cues. In this work we applied a recent development in neural network architecture, the transformer, to the task of separating satirical news from fake news; transformers had not previously been applied to this specific problem. Evaluation on a publicly available and carefully curated dataset shows that a classifier framework built around a DistilBERT architecture outperformed existing machine-learning approaches. Further improvement over the baseline DistilBERT was achieved through non-standard tokenization schemes and by varying the pre-training and text pre-processing strategies. The improvement over existing approaches stands at 0.0429 (5.2%) in F1 and 0.0522 (6.4%) in accuracy. Evaluation on two additional datasets shows the framework's ability to generalize across datasets without diminished performance.
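The classifier framework described above can be sketched as a DistilBERT model with a binary sequence-classification head. The following is a minimal illustration, assuming the Hugging Face `transformers` library and PyTorch; the tiny configuration sizes and the random initialization are assumptions chosen so the sketch runs without downloading pre-trained weights, whereas the paper's framework fine-tunes a pre-trained model.

```python
import torch
from transformers import DistilBertConfig, DistilBertForSequenceClassification

# Illustrative, randomly initialised configuration (not the paper's settings):
# a real run would load pre-trained weights, e.g. "distilbert-base-uncased".
config = DistilBertConfig(
    vocab_size=30522,  # default DistilBERT vocabulary size
    num_labels=2,      # binary task: satirical vs. fake news
    n_layers=2, n_heads=2, dim=64, hidden_dim=128,  # shrunk for a quick demo
)
model = DistilBertForSequenceClassification(config)
model.eval()

# Toy batch of two "headlines" as random token ids; in practice these come
# from a tokenizer, e.g. DistilBertTokenizerFast(...)(texts).
input_ids = torch.randint(0, config.vocab_size, (2, 16))
attention_mask = torch.ones_like(input_ids)

with torch.no_grad():
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
# logits has shape (batch_size, num_labels) = (2, 2); argmax gives the class.
```

Fine-tuning would attach a cross-entropy loss over the two labels and update all weights on the labelled satire/fake-news corpus.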
