Abstract

Since social media has so many users who are not qualified to report news, fake news has become a major problem in recent years. It is therefore crucial to identify and restrict the dissemination of false information. Numerous deep learning models that make use of natural language processing have yielded excellent results in the detection of fake news. Bidirectional Encoder Representations from Transformers (BERT), based on transfer learning, is one of the most advanced models. In this work, the researchers compared earlier studies that employed baseline models with research articles in which a pretrained BERT model was used for the detection of fake news. The literature analysis revealed that pretrained models are more effective at identifying fake news because they take less time to train and yield better results. Based on these findings, the researchers recommend using pretrained models that take advantage of transfer learning, which shortens training time and enables the use of large datasets, and choosing a reliable model that performs well in terms of precision, recall, and the number of false positive and false negative outputs. Accordingly, the researchers created an improved BERT model, fine-tuned to meet the demands of the fake news identification task. To obtain the most accurate representation of the input text, the final layer of this model is also unfrozen and trained on news texts. The dataset used in the study, downloaded from the Kaggle website, included 23,502 fake news articles and 21,417 real news articles. The results of this study demonstrate that the proposed model outperformed the other models, achieving 99.96% accuracy and a 99.96% F1 score.
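The abstract describes fine-tuning BERT for binary fake/real news classification with the final encoder layer unfrozen. The snippet below is a minimal sketch of that setup, not the authors' released code: it assumes the Hugging Face Transformers library with `bert-base-uncased`, assumes the rest of the encoder is kept frozen (the abstract only states that the final layer is unfrozen), and assumes the Kaggle dataset has been merged into lists `texts` and `labels`; hyperparameters such as the learning rate and sequence length are illustrative choices.

```python
# Hedged sketch: freeze BERT, unfreeze the last encoder layer, and train a
# binary classifier head for fake-news detection.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 1 = fake, 0 = real (assumed labels)

# Freeze every pretrained parameter, then unfreeze the final encoder layer;
# the newly initialized classification head stays trainable by default.
for param in model.bert.parameters():
    param.requires_grad = False
for param in model.bert.encoder.layer[-1].parameters():
    param.requires_grad = True

def make_loader(texts, labels, batch_size=16):
    """Tokenize raw article texts and wrap them in a DataLoader."""
    enc = tokenizer(texts, truncation=True, padding=True,
                    max_length=256, return_tensors="pt")
    ds = TensorDataset(enc["input_ids"], enc["attention_mask"],
                       torch.tensor(labels))
    return DataLoader(ds, batch_size=batch_size, shuffle=True)

def train(model, loader, epochs=2, lr=2e-5):
    """Standard fine-tuning loop over the trainable parameters only."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    optim = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(epochs):
        for input_ids, attention_mask, labels in loader:
            optim.zero_grad()
            out = model(input_ids=input_ids.to(device),
                        attention_mask=attention_mask.to(device),
                        labels=labels.to(device))
            out.loss.backward()
            optim.step()
```

After training, accuracy and the F1 score reported in the abstract can be computed on a held-out split with standard tooling such as `sklearn.metrics.accuracy_score` and `f1_score`.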
