Articles published on Fake news
9833 Search results
- Research Article
- 10.1016/j.physa.2026.131396
- Apr 1, 2026
- Physica A: Statistical Mechanics and its Applications
- Diana Riazi + 1 more
This study investigates who should bear the responsibility of combating the spread of misinformation in social networks. Should that be the online platforms or their users? Should that be done by debunking the ‘fake news’ already in circulation or by investing in preemptive efforts to prevent their diffusion altogether? We seek to answer such questions in a stylized opinion dynamics framework, where agents in a network aggregate the information they receive from peers and/or from influential external sources, with the aim of learning a ground-truth among a set of competing hypotheses. In most cases, we find centralized sources to be more effective at combating misinformation than distributed ones, suggesting that online platforms should play an active role in the fight against fake news. In line with literature on the ‘backfire effect’, we find that debunking in certain circumstances can be a counterproductive strategy, whereas some targeted strategies (akin to ‘deplatforming’) and/or preemptive campaigns turn out to be quite effective. Despite its simplicity, our model provides useful guidelines that could inform the ongoing debate on online disinformation and the best ways to limit its damaging effects.
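The opinion-aggregation setting this abstract describes can be illustrated, very loosely, by a DeGroot-style pooling model in which networked agents average their beliefs over competing hypotheses with their neighbours while a centralized source broadcasts the ground truth. This is a hedged, minimal sketch for intuition only, not the authors' actual model; the influence matrix, the `source_weight` mixing parameter, and the update rule are all illustrative assumptions.

```python
import numpy as np

def simulate(adj, beliefs, truth_idx, source_weight=0.1, steps=50):
    """adj: row-stochastic peer-influence matrix (N x N).
    beliefs: N x K matrix, each row a distribution over K hypotheses.
    A centralized source broadcasts certainty in hypothesis `truth_idx`."""
    n, k = beliefs.shape
    source = np.zeros(k)
    source[truth_idx] = 1.0
    for _ in range(steps):
        peer_view = adj @ beliefs                      # aggregate neighbours' beliefs
        beliefs = (1 - source_weight) * peer_view + source_weight * source
        beliefs /= beliefs.sum(axis=1, keepdims=True)  # keep each row a distribution
    return beliefs

rng = np.random.default_rng(0)
n, k = 20, 3
adj = rng.random((n, n))
adj /= adj.sum(axis=1, keepdims=True)   # row-stochastic influence weights
beliefs = rng.dirichlet(np.ones(k), size=n)  # random prior beliefs
final = simulate(adj, beliefs, truth_idx=0)
print(final[:, 0].mean())               # average mass placed on the truth
```

Even a weak centralized source (`source_weight=0.1`) pulls the whole population toward the true hypothesis here, which echoes the abstract's finding that centralized sources can be effective; the real paper studies far richer strategies (debunking, deplatforming, preemption).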
- Research Article
- 10.1016/j.engappai.2026.114153
- Apr 1, 2026
- Engineering Applications of Artificial Intelligence
- Chao Cheng + 1 more
Multi-granularity alignment and cross-modal reasoning for fake news video explanation
- Research Article
- 10.30574/ijsra.2026.18.3.0471
- Mar 31, 2026
- International Journal of Science and Research Archive
- Aliraja Ansari + 4 more
The rapid proliferation of digital news media and social media has accelerated the spread of misinformation, undermining the reliability of information and public trust. Traditional fake news detection systems rely on machine learning models trained on predefined datasets, which limits their flexibility in the face of new events and real-time changes. Stand-alone large language models (LLMs) are also prone to failing to ground their responses in existing evidence, creating a high risk of hallucination and contextual bias. This paper proposes a real-time AI-based News Verification System that combines Retrieval-Augmented Generation (RAG) with a Large Language Model (Gemini 2.0 Flash) to achieve context-sensitive and explainable verification of news content. The system is built on a modular, REST-based architecture with a FastAPI backend, a Next.js frontend, MongoDB as the persistence layer, and JWT for authentication. The Tavily Search API retrieves real-time contextual evidence, which is then combined with the LLM's reasoning to make its responses more credible and better grounded. The framework generates structured output: a classification label (Real/Fake), a credibility score (0-100%), an explanatory summary, and identified suspicious phrases. Performance assessment shows a mean response latency of 2.8 seconds, with the system remaining stable under concurrent API requests. The proposed architecture offers scalability, modularity, and production readiness for real-time misinformation detection in dynamic digital environments.
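The retrieve-then-verify flow described above can be sketched as a toy pipeline. To keep it self-contained, the retriever and scorer below are crude stand-ins based on keyword overlap, not the Tavily Search API or Gemini components the authors actually use; only the shape of the structured verdict (label, credibility score, evidence) mirrors the abstract.

```python
def retrieve(claim, corpus):
    """Rank evidence snippets by word overlap with the claim (toy retriever)."""
    words = set(claim.lower().split())
    scored = [(len(words & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(reverse=True)
    return [doc for score, doc in scored if score > 0]

def verify(claim, corpus, threshold=0.5):
    """Return a structured verdict: label, credibility score, evidence used."""
    evidence = retrieve(claim, corpus)
    words = set(claim.lower().split())
    support = max((len(words & set(d.lower().split())) / len(words)
                   for d in evidence), default=0.0)
    return {
        "label": "Real" if support >= threshold else "Fake",
        "credibility": round(support * 100),   # 0-100%, as in the paper's output
        "evidence": evidence[:3],
    }

corpus = [
    "who declares new vaccine safe after large trial",
    "stock markets rally on rate cut hopes",
]
print(verify("new vaccine safe after trial", corpus))
```

The value of the RAG design is that the verdict is tied to retrieved evidence rather than to a model's unsupported generation; in the real system the overlap heuristic is replaced by live web retrieval and LLM reasoning.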
- Research Article
- 10.4314/gjedr.v25i1.1
- Mar 9, 2026
- Global Journal of Educational Research
- Racheal Daniel Ama-Abasi + 2 more
The study examined demographic factors and the dissemination of misinformation on social media among undergraduate students at the University of Calabar. Two purposes and two research questions were formulated. The study adopted a descriptive survey research design, with a sample of 256 undergraduate students from the Department of Mass Communication, University of Calabar, for the 2023/2024 academic session. The main instrument used for data collection was the Social Media Use and Fake News Dissemination Questionnaire (SMUFNDQ), which the researcher randomly administered to the students. The data were analyzed using frequency counts, and the results revealed that over 50% of respondents have a negative perception of fake news dissemination. There was also a significant relationship between social media use and fake news dissemination among the students. The probability of sharing misinformation on social media was higher among female students and students aged 26-30 years; the spread of misinformation was also prevalent among second-year Mass Communication students, and students used Facebook more than other social media platforms to disseminate misinformation. It was concluded that university management should put check mechanisms in place to reduce the dissemination of fake news among students. Among other recommendations, the study suggested that school authorities work out modalities to reduce the spread of misinformation among students of higher institutions.
- Research Article
- 10.3329/iiucs.v21i1.85090
- Mar 9, 2026
- IIUC Studies
- Shadeka Jannat
Social media has become an essential component of modern communication, allowing people and organizations to reach a large audience immediately. In Bangladesh, where Islam is the most widely practiced religion, social media platforms have become effective resources for promoting Da'wah (inviting people to Islam) and distributing Islamic information. This study explores the positive and negative effects of social media on spreading Da'wah and Islamic knowledge in Bangladesh. A qualitative approach was followed, with data collected through document analysis. According to the findings, the positive impacts of social media in spreading Da'wah and Islamic knowledge include spreading the Holy Quran and Sunnah, the availability of many Islamic apps, showing the true face of Islam, the ease of spreading Da'wah, and sharing authentic knowledge; the negative impacts include false news generated and spread through social media, lack of authenticity and verification, misinformation and misinterpretation, and the spread of extremist views. This study will help the younger generation use social media for good so that they can attain success here and in the hereafter. IIUC Studies, Vol. 21, Issue 1, Dec. 2024, pp. 149-168
- Research Article
- 10.1007/s00530-025-02201-w
- Mar 9, 2026
- Multimedia Systems
- Guangyue Wu + 3 more
Multi-domain feature enhanced adaptive fusion network for multi-modal fake news detection
- Research Article
- 10.3389/fcomp.2026.1655186
- Mar 4, 2026
- Frontiers in Computer Science
- Viana Nijia Zhang + 3 more
Introduction This United Kingdom (UK)-based study examines how online tools and technologies shape young adults’ interactions with misinformation and fake news in everyday contexts, integrating insights from young adults and key stakeholders from both public and private sectors. Methods Through two data collection workshops—a stakeholder engagement session (N=22) and a co-design workshop with young adults aged 18 to 25 (M=7), we explored the challenges that young people face when encountering and interacting with misinformation and fake news online. Additionally, we examined the design of privacy-enhancing technologies, as well as the innovation and policy development priorities highlighted by our stakeholders. Results Our findings point to how young adults become vulnerable to exploitation by malicious actors online in various contexts, especially focusing on emotionally vulnerable life events. Our findings also emphasise the need for more empirical research that engages young adults within enclosed online communities, such as online gaming voice channels, where opinions can become radicalised, emotions intensified, and young adults desensitised. Discussion We propose implications for designing harm-reducing tools through increasing young people’s individual agency, equipping them with the skills to recognise, assess, and address misinformation whilst also enhancing their algorithmic and new media literacy. We also advocate for increased reciprocal interactions and collaboration between mainstream and marginalised communities. These recommendations aim to guide the education sector, parents, policymakers, media professionals, technology designers, and other stakeholders in exercising collective agency and fostering collaborative efforts to share communications and values that contribute to safeguarding a safer online environment for young adults.
- Research Article
- 10.1080/02185377.2026.2636494
- Mar 4, 2026
- Asian Journal of Political Science
- Almas Arzikulov + 3 more
ABSTRACT The purpose of this study was to analyze current trends in the information and political agenda in the Republic of Kazakhstan. The study provides a comparative analysis of the coverage of events in Kazakhstan in Kazakh state-funded media and in English and American media. A co-occurrence network and tone analysis of 200 selected newspaper articles, 400 items from 3 social media platforms, and 150 video materials from Kazakh media published in 2022-2024 were used. Using qualitative content analysis, the study identified the themes in Kazakh media that emerge as the country's principal problems: the land issue, the language issue, the interethnic issue, and the socio-economic issue. The study determined the tone of voice of articles in Kazakh and foreign media. Notably, the speech of the President of Kazakhstan and events in the country were covered critically in Kazakh print media, while on social media channels the tone was neutral. The analysis also revealed a significant amount of fake news. These results show how propaganda campaigns can play a vital role in both limiting and promoting particular news in Kazakh society.
- Research Article
- 10.1080/02533839.2026.2630738
- Mar 2, 2026
- Journal of the Chinese Institute of Engineers
- Swarna Sudha M + 3 more
ABSTRACT Fake news continues to be a growing concern across social media platforms, where users encounter a mix of text and images that can be misleading or completely false. A new deep learning framework is proposed that processes and combines both text and image information to improve fake news identification. The model uses a Gated Recurrent Unit (GRU) to analyze textual features, optimized through a metaheuristic technique called the Sparrow Search Optimizer (SSO), which fine-tunes the model’s internal structure for better performance. Visual content is processed using ResNet-101, a proven convolutional neural network for extracting meaningful patterns from images. These features are then merged using Multi-Modal Bilinear Pooling (MBP), a technique that effectively combines both types of data to create a more complete representation of the news content. A softmax classifier is used at the final stage to label the content as real or fake. This hybrid model was tested using Twitter and Weibo datasets containing a wide range of real and fake news samples. The results showed a significant improvement in classification accuracy over text-only or image-only models. By integrating visual and textual elements, this system offers a more reliable solution to fake news detection in today’s multimedia-driven digital landscape.
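The fusion step this abstract centres on, bilinear pooling of a text vector and an image vector followed by a softmax head, can be sketched in a few lines. This is a hedged illustration of the general Multi-Modal Bilinear Pooling idea, not the authors' implementation: the GRU and ResNet-101 encoders are replaced by random stand-in vectors, the signed-square-root and L2 normalization steps are common MBP conventions assumed here, and the SSO optimization is out of scope.

```python
import numpy as np

def bilinear_pool(text_feat, img_feat):
    """Outer product captures all pairwise text-image feature interactions."""
    fused = np.outer(text_feat, img_feat).ravel()
    fused = np.sign(fused) * np.sqrt(np.abs(fused))  # signed sqrt, common in MBP
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused        # L2 normalize

def classify(fused, weights, bias):
    """Softmax over {real, fake} from the fused representation."""
    logits = weights @ fused + bias
    exp = np.exp(logits - logits.max())               # stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(1)
text_feat = rng.standard_normal(64)    # stand-in for a GRU sentence encoding
img_feat = rng.standard_normal(128)    # stand-in for a ResNet-101 embedding
fused = bilinear_pool(text_feat, img_feat)
probs = classify(fused, rng.standard_normal((2, fused.size)), np.zeros(2))
print(probs)
```

The outer product makes the fused representation quadratic in size (64 x 128 = 8192 here), which is why production systems often use compact approximations; the sketch keeps the full product for clarity.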
- Research Article
- 10.1037/xge0001887
- Mar 1, 2026
- Journal of experimental psychology. General
- Daniel A Effron + 2 more
Ranked among the most serious global threats, misinformation spreads in part because people share it on social media. Based on theories that people usually share misinformation unintentionally, interventions typically aim to curb misinformation's spread by helping people distinguish fact from falsehood. However, people sometimes intentionally spread misinformation despite recognizing its falsity. Understanding and curbing this type of sharing requires new theory and tools. Leveraging insights from moral psychology, the present research examines whether people will be more reluctant to share misinformation when they think carefully about its moral implications. Engaging in such moral deliberation, we theorize, leads people to judge misinformation as more unethical to share, which inhibits them from forming intentions to share it. Five experiments (four preregistered, N = 2,509 U.S. and U.K. social media users, including a demographically representative U.S. sample) tested a moral-deliberation procedure in which participants list reasons why it would be ethical or unethical to share different news headlines on social media. This procedure-relative to control conditions that prompted nonmoral deliberation, prompted nondeliberative thinking about morality, or included no prompt-reduced intentions to share fake news about business, health, and politics, even when the news was flagged as false. These effects were (a) larger when the fake news aligned with participants' politics, (b) reversed for real news, (c) still observed after a delay, and (d) mediated by moral judgments. Our results offer a theoretical foundation for new tools to fight society's "infodemic" of misinformation. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
- Research Article
- 10.11591/ijict.v15i1.pp179-188
- Mar 1, 2026
- International Journal of Informatics and Communication Technology (IJ-ICT)
- Siva Dhievaraj + 1 more
Concern over biomedical fake news is rising, particularly as false information about illnesses, medical procedures, and public health regulations becomes more prevalent. Recognizing such false information is essential, and deep learning (DL) algorithms can offer a potent remedy, especially when paired with sophisticated natural language processing (NLP) methods. This technique improves the model's capacity to ignore frequently used but uninformative terms and concentrate on important terminology. The model's ability to focus on the phrases most relevant to fake news identification is enhanced by chi-squared selection, a statistical test that measures the dependency between variables and helps remove unnecessary data. The Lasso approach, a form of regression that shrinks less significant coefficients to zero, is used for feature selection, ensuring that the model relies only on the most predictive features for classification. Feature extraction, which turns unprocessed text into numerical data, is a crucial step in preparing the data for DL models. Once the structured data has been analyzed, algorithms such as stochastic gradient descent (SGD) and long short-term memory (LSTM) networks can determine whether an article is accurate. By fusing DL with sophisticated NLP techniques to identify biomedical fake news effectively, the authenticity and dependability of medical information shared across platforms can be better ensured.
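The chi-squared term-selection step described in this abstract can be sketched on a toy binary term-document matrix. This is an assumption-laden illustration, not the paper's pipeline: the data is fabricated for demonstration, and the Lasso step and the downstream SGD/LSTM classifiers are omitted; it only shows how a term that tracks the label scores high while a noise term scores zero.

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-squared statistic between each binary term column and a binary label."""
    y = np.asarray(y)
    scores = []
    for j in range(X.shape[1]):
        # 2x2 contingency table: term present/absent vs label 0/1
        obs = np.array([[np.sum((X[:, j] == a) & (y == b))
                         for b in (0, 1)] for a in (0, 1)], dtype=float)
        row = obs.sum(axis=1, keepdims=True)
        col = obs.sum(axis=0, keepdims=True)
        exp = row * col / obs.sum()                   # expected counts if independent
        scores.append(np.sum((obs - exp) ** 2 / np.where(exp == 0, 1, exp)))
    return np.array(scores)

# Term 0 perfectly tracks the label; term 1 is noise.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])
scores = chi2_scores(X, y)
keep = scores.argsort()[::-1][:1]   # keep the top-1 most label-dependent term
print(keep)                         # → [0]
```

Filtering columns this way before training is what lets the downstream classifier concentrate on informative terminology rather than frequently used but uninformative words.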
- Research Article
- 10.1016/j.automatica.2025.112745
- Mar 1, 2026
- Automatica
- Qingsong Liu + 2 more
Dynamics of opinion propagation with memory and fake news
- Research Article
- 10.1016/j.im.2025.104293
- Mar 1, 2026
- Information & Management
- Alireza Farnoush + 4 more
Towards developing fake and satire news detection policies using component-based SEM and interpersonal detection theory
- Research Article
- 10.1016/j.eswa.2025.130238
- Mar 1, 2026
- Expert Systems with Applications
- An Lao + 4 more
Dynamic lifecycle induced authenticity analysis for multi-modal fake news detection
- Research Article
- 10.1016/j.eij.2026.100886
- Mar 1, 2026
- Egyptian Informatics Journal
- Ahmet Okan Arık + 2 more
LLM-based data augmentation for text classification on imbalanced datasets: A case study on fake news detection
- Research Article
- 10.1016/j.array.2026.100687
- Mar 1, 2026
- Array
- Muhammad Wasim + 6 more
Approximately half of the global population relies on social media platforms such as Facebook, Twitter, and Instagram for news consumption. The vast volume and rapid dissemination of information on these platforms pose substantial challenges for the timely and accurate detection of fake news. Given the detrimental effects of misinformation on public health, social trust, and political stability, researchers are increasingly developing AI-based automated systems to check news accuracy. However, the majority of existing fake news detection methods focus primarily on content-based features, often ignoring essential factors such as user profiling, social context, and knowledge extraction. Even datasets that contain elements of social context and user behavior often lack the knowledge-based features necessary for effective document retrieval, stance identification, social engagement analysis, and user profile integration. This work offers a thorough, fully annotated dataset that integrates user profiles, stance information, social engagements, knowledge extraction, and content elements into a single resource to overcome these limitations. Building on this dataset, the study develops KeepUp, a unified system that integrates user profiles, social media activity, and knowledge extraction to detect fake news. KeepUp outperforms all baseline models, achieving a detection accuracy of 0.78, demonstrating the effectiveness of this combined approach.
- Research Article
- 10.1016/j.ipm.2025.104391
- Mar 1, 2026
- Information Processing & Management
- Zhenhua Tan + 1 more
Emotion-semantic interaction network for fake news detection: Perspectives on question and non-question comment semantics
- Research Article
- 10.1016/j.techsoc.2025.103091
- Mar 1, 2026
- Technology in Society
- João Varela Da Costa + 2 more
Corporate fake news impacts: A reference model
- Research Article
- 10.1016/j.ipm.2025.104479
- Mar 1, 2026
- Information Processing & Management
- Yihong Meng + 4 more
Dynamic hierarchical memory improved mixture-of-experts for multimodal fake news detection
- Research Article
- 10.1016/j.ipm.2025.104432
- Mar 1, 2026
- Information Processing & Management
- Fang Liu + 4 more
Enhancing fake news video detection with self-driven question–answer from LMMs