This study systematically reviews advancements in fake news detection techniques, examining methodologies and frameworks across machine learning, natural language processing, social network analysis, and multimodal approaches. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, ensuring a rigorous and transparent process that resulted in the selection and evaluation of 254 research articles. The findings reveal that supervised learning models, such as Support Vector Machines, Decision Trees, and Naïve Bayes, have shown strong performance in text-based fake news classification, particularly when feature selection is optimized. Deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, have further advanced detection accuracy by capturing complex linguistic patterns, though challenges with computational demands and model interpretability remain. In scenarios with limited labeled data, unsupervised and semi-supervised approaches offer adaptability, with clustering, anomaly detection, and iterative self-labeling proving effective against evolving misinformation. Additionally, cross-disciplinary approaches, integrating insights from psychology, sociology, and network science, enhance detection models by accounting for user behavior, emotional appeal, and social conformity in the spread of fake news. Case studies from collaborative projects underscore the potential of interdisciplinary efforts to develop robust, adaptable detection frameworks. This review concludes that effective fake news detection requires a multifaceted approach, combining technical advancements with social science insights to address the complexity and adaptive nature of misinformation. The study emphasizes the need for continued research into hybrid models and adaptive, real-time detection solutions to strengthen defenses against fake news in diverse digital environments.
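To make the supervised text-classification approach the abstract summarizes concrete, the following is a minimal sketch of a multinomial Naïve Bayes classifier with bag-of-words features, one of the model families the review identifies. The training corpus and labels are hypothetical toy data for illustration only, not drawn from any dataset in the reviewed studies.

```python
import math
from collections import Counter


def tokenize(text):
    """Crude whitespace tokenizer; real pipelines would add normalization."""
    return text.lower().split()


class NaiveBayes:
    """Multinomial Naive Bayes with Laplace smoothing for text classification."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing constant

    def fit(self, docs, labels):
        self.classes = set(labels)
        # Log prior: fraction of training documents per class.
        self.priors = {c: math.log(labels.count(c) / len(labels))
                       for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for doc, label in zip(docs, labels):
            tokens = tokenize(doc)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        self.totals = {c: sum(self.word_counts[c].values())
                       for c in self.classes}
        return self

    def predict(self, doc):
        vocab_size = len(self.vocab)
        scores = {}
        for c in self.classes:
            score = self.priors[c]
            for tok in tokenize(doc):
                count = self.word_counts[c][tok]
                # Smoothed log-likelihood of the token under class c.
                score += math.log((count + self.alpha) /
                                  (self.totals[c] + self.alpha * vocab_size))
            scores[c] = score
        return max(scores, key=scores.get)


# Hypothetical toy corpus (illustrative, not from the review's data).
train_docs = [
    "shocking miracle cure doctors hate this secret",
    "you won't believe this celebrity scandal exposed",
    "government confirms budget figures for next year",
    "study published in peer reviewed journal reports findings",
]
train_labels = ["fake", "fake", "real", "real"]

clf = NaiveBayes().fit(train_docs, train_labels)
print(clf.predict("shocking secret cure exposed"))                    # fake
print(clf.predict("peer reviewed study reports government figures"))  # real
```

In practice, the reviewed studies report that such models perform best when combined with careful feature selection (e.g., TF-IDF weighting or n-gram features) rather than the raw counts used in this sketch.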