Fake news detection (FND) involves predicting the likelihood that a particular news article (news report, editorial, exposé, etc.) is intentionally deceptive. Arabic FND has received increasing attention over the last decade, and many detection approaches have demonstrated some ability to identify fake news across multiple datasets. However, most existing approaches do not exploit recent advances in natural language processing, namely neural networks and transformer‐based language models. This paper presents a comprehensive comparative study of neural network and transformer‐based language models for Arabic FND, evaluating their performance against one another and extensively analyzing the possible reasons for the differences in the results obtained by the different approaches. The results demonstrate that transformer‐based models outperform neural network‐based solutions: the F1 score rises from 0.83 (best neural network‐based model, GRU) to 0.95 (best transformer‐based model, QARiB), and accuracy improves by 16% over the best neural network‐based solution. Finally, we highlight the main gaps in Arabic FND research and suggest future research directions.
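For readers unfamiliar with the F1 score reported above, the following is a minimal illustrative sketch (not taken from the paper, and using made-up toy labels) of how binary F1 is computed: it is the harmonic mean of precision and recall for the positive ("fake") class.

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall for the positive class."""
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 1 = fake, 0 = genuine (hypothetical labels, not the paper's data).
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
print(round(f1_score(y_true, y_pred), 2))  # → 0.67
```

Unlike plain accuracy, F1 penalizes both missed fake articles (false negatives) and genuine articles flagged as fake (false positives), which is why both metrics are reported.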