Reviewing the Framework of Blockchain in Fake News Detection
In the social media environment, fake news is a significant issue, and it can arise online or offline across the field of journalism. Media and publishing houses have expressed concern and are looking for solutions to the problem. Blockchain is one solution the industry has to offer in this area: it can support secure digital trading, source or identity verification, and the tracing of quotes back to a particular news piece, photo, or video. By generating a shared, timestamped record of files tied to a specific article, video, or image, it not only helps fact-checkers verify details but also provides documentation of the metadata generated at every stage. It can also reduce the cost of countering false information by enabling direct disclosure to people with first-hand knowledge of the subject. The proposed framework for tracking fake news is supported by blockchain technology, which allows news organizations to deliver their content to their subscribers transparently. The framework was created for journalists and can be integrated into any current platform to publish a news piece along with its source information.
- Research Article
189
- 10.1609/icwsm.v14i1.7329
- May 26, 2020
- Proceedings of the International AAAI Conference on Web and Social Media
Consuming news from social media is becoming increasingly popular. However, social media also enables the wide dissemination of fake news. Because of the detrimental effects of fake news, fake news detection has attracted increasing attention. However, the performance of detecting fake news only from news content is generally limited, as fake news pieces are written to mimic true news. In the real world, news pieces spread through propagation networks on social media, and these propagation networks usually involve multiple levels. In this paper, we study the challenging problem of investigating and exploiting hierarchical news propagation networks on social media for fake news detection. In an attempt to understand the correlations between news propagation networks and fake news, first, we build hierarchical propagation networks for fake news and true news pieces; second, we perform a comparative analysis of the propagation network features from structural, temporal, and linguistic perspectives between fake and real news, which demonstrates the potential of utilizing these features to detect fake news; third, we show the effectiveness of these propagation network features for fake news detection. We further validate the effectiveness of these features through feature importance analysis. We conduct extensive experiments on real-world datasets and demonstrate that the proposed features can significantly outperform state-of-the-art fake news detection methods by at least 1.7%, with an average F1 > 0.84. Altogether, this work presents a data-driven view of hierarchical propagation networks and fake news and paves the way towards a healthier online news ecosystem.
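The kind of structural and temporal propagation features this abstract describes can be illustrated with a minimal sketch. The cascade representation (child, parent, timestamp) and the feature names below are assumptions for illustration only, not the paper's actual code or feature set:

```python
from collections import defaultdict

def propagation_features(edges):
    """Compute simple structural/temporal features of a news cascade.

    `edges` is a list of (node, parent, timestamp) tuples; the root
    (the original post) has parent None. Illustrative sketch only.
    """
    children = defaultdict(list)
    times = {}
    root = None
    for node, parent, t in edges:
        times[node] = t
        if parent is None:
            root = node
        else:
            children[parent].append(node)

    # Depth of each node via BFS from the root.
    depth = {root: 0}
    queue = [root]
    while queue:
        n = queue.pop(0)
        for c in children[n]:
            depth[c] = depth[n] + 1
            queue.append(c)

    # Nodes per level, to measure the widest point of the cascade.
    level_counts = defaultdict(int)
    for d in depth.values():
        level_counts[d] += 1

    return {
        "num_nodes": len(times),                    # structural: cascade size
        "max_depth": max(depth.values()),           # structural: how deep it spread
        "max_breadth": max(level_counts.values()),  # structural: widest level
        "time_span": max(times.values()) - min(times.values()),  # temporal extent
    }
```

Feature vectors like this, computed per news piece, could then feed any standard classifier; the paper's analysis compares such distributions between fake and real news.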
- Conference Article
18
- 10.1109/icomet48670.2020.9074071
- Jan 1, 2020
Social media is one of the major platforms for getting news and information. However, it also makes the widespread dissemination of fake news convenient. The motivation behind fake news is to create hype in order to capture the audience's attention and build a negative impact on society. Fake news detection is therefore necessary to purify the Internet environment. Various machine-learning-based detection algorithms have been designed to detect fake news. We use an attention-based transformer model on a publicly available dataset for the detection of fake and real news. This research aims to test and compare state-of-the-art algorithms and our proposed technique in detecting fake and real news. Our results show that the transformer model improves fake news detection accuracy by 15% compared to a hybrid CNN.
- Research Article
16
- 10.3390/electronics12173676
- Aug 31, 2023
- Electronics
Nowadays, the dissemination of news information has become more rapid, liberal, and open to the public. People can find what they want to know more and more easily from a variety of sources, including traditional news outlets and new social media platforms. However, at a time when our lives are glutted with all kinds of news, we cannot help but doubt the veracity and legitimacy of these news sources; meanwhile, we also need to guard against the possible impact of various forms of fake news. To combat the spread of misinformation, more and more researchers have turned to natural language processing (NLP) approaches for effective fake news detection. However, in the face of increasingly serious fake news events, existing detection methods still need to be continuously improved. This study proposes a modified proof-of-concept model named NER-SA, which integrates natural language processing (NLP) and named entity recognition (NER) to conduct in-domain and cross-domain analysis of fake news detection on three existing datasets simultaneously. The named entities associated with any particular news event exist in a finite and available evidence pool. Therefore, the entities mentioned in any authentic news article must be recognizable within this entity bank. A piece of fake news inevitably includes only some of the entities in the entity bank. False information is deliberately fabricated with fictitious, imaginary, and even unreasonable sentences and content. As a result, there must be differences in statements, writing logic, and style between legitimate news and fake news, meaning that it is possible to successfully detect fake news. We developed a mathematical model and used the simulated annealing algorithm to find the optimal legitimate area. Comparing the detection performance of the NER-SA model with current state-of-the-art models proposed in other studies, we found that the NER-SA model indeed has superior performance in detecting fake news.
For in-domain analysis, the accuracy increased by an average of 8.94% on the LIAR dataset and 19.36% on the fake or real news dataset, while the F1-score increased by an average of 24.04% on the LIAR dataset and 19.36% on the fake or real news dataset. In cross-domain analysis, the accuracy and F1-score for the NER-SA model increased by an average of 28.51% and 24.54%, respectively, across six domains in the FakeNews AMT dataset. The findings and implications of this study are further discussed with regard to their significance for improving accuracy, understanding context, and addressing adversarial attacks. The development of stylometric detection based on NLP approaches using NER techniques can improve the effectiveness and applicability of fake news detection.
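The entity-bank intuition behind NER-SA can be sketched as a simple overlap heuristic: score an article by how many of its named entities appear in the event's evidence pool, and flag low-overlap articles. The function names and the fixed threshold below are hypothetical stand-ins; the paper instead searches for an optimal "legitimate area" with simulated annealing:

```python
def entity_overlap_score(article_entities, entity_bank):
    """Fraction of the article's named entities found in the event's
    evidence pool. Illustrative heuristic, not the paper's NER-SA model."""
    if not article_entities:
        return 0.0
    bank = {e.lower() for e in entity_bank}
    hits = sum(1 for e in article_entities if e.lower() in bank)
    return hits / len(article_entities)

def flag_as_fake(article_entities, entity_bank, threshold=0.5):
    """Flag articles whose entity overlap falls below a (hypothetical)
    cutoff; NER-SA learns this decision boundary rather than fixing it."""
    return entity_overlap_score(article_entities, entity_bank) < threshold
```

In practice the entities would come from an NER tagger run over verified coverage of the same event.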
- Research Article
40
- 10.1155/2021/3434458
- Jul 28, 2021
- Scientific Programming
The exponential growth in fake news and its inherent threat to democracy, public trust, and justice has escalated the necessity for fake news detection and mitigation. Detecting fake news is a complex challenge as it is intentionally written to mislead and hoodwink. Humans are not good at identifying fake news: human detection of fake news is reported to be at a rate of 54%, with an additional 4% reported in the literature as being speculative. The significance of fighting fake news is exemplified during the present pandemic. Consequently, social networks are ramping up the usage of detection tools and educating the public in recognising fake news. In the literature, it was observed that several machine learning algorithms have been applied to the detection of fake news with limited and mixed success. However, several advanced machine learning models are not being applied, although recent studies demonstrate the efficacy of the ensemble machine learning approach; hence, the purpose of this study is to assist in the automated detection of fake news. An ensemble approach is adopted to help resolve the identified gap. This study proposes a blended machine learning ensemble model developed from logistic regression, support vector machine, linear discriminant analysis, stochastic gradient descent, and ridge regression, which is then used on a publicly available dataset to predict whether a news report is true or not. The proposed model is appraised against popular classical machine learning models, with performance metrics such as AUC, ROC, recall, accuracy, precision, and F1-score used to measure its performance. The results presented show that the proposed model outperformed the other popular classical machine learning models.
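A blended ensemble of the kind described can be sketched as soft voting: average the base models' predicted probabilities and threshold the mean. The callables below are stand-ins for the paper's trained logistic regression, SVM, LDA, SGD, and ridge models; this is an illustrative sketch, not the study's implementation:

```python
def blend_predict(models, x):
    """Blend base models by averaging their predicted probability that a
    news item is fake; label it fake if the mean exceeds 0.5.

    `models` is any list of callables x -> probability in [0, 1].
    Illustrative only; a real blend might instead feed base-model
    outputs into a meta-learner (stacking).
    """
    probs = [m(x) for m in models]
    mean_prob = sum(probs) / len(probs)
    return ("fake" if mean_prob > 0.5 else "real"), mean_prob
```

Averaging probabilities (rather than hard majority voting) lets a confident minority model outweigh uncertain ones, which is one reason blended ensembles often beat their individual members.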
- Conference Article
64
- 10.1145/3292522.3326012
- Jun 26, 2019
Despite increased interest in the study of fake news, how to aid users' decisions in handling suspicious or false information has not been well understood. To obtain a better understanding of the impact of warnings on individuals' fake news decisions, we conducted two online experiments, evaluating the effect of three warnings (i.e., one Fact-Checking and two Machine-Learning based) against a control condition, respectively. Each experiment consisted of three phases examining participants' recognition, detection, and sharing of fake news, respectively. In Experiment 1, relative to the control condition, participants' detection of both fake and real news was better when the Fact-Checking warning, but not the two Machine-Learning warnings, was presented with fake news. Post-session questionnaire results revealed that participants showed more trust in the Fact-Checking warning. In Experiment 2, we proposed a Machine-Learning-Graph warning that contains the detailed results of machine-learning based detection, and removed the source within each news headline to test its impact on individuals' fake news detection with warnings. We did not replicate the effect of the Fact-Checking warning obtained in Experiment 1, but the Machine-Learning-Graph warning increased participants' sensitivity in differentiating fake news from real news. Although the best performance was obtained with the Machine-Learning-Graph warning, participants trusted it less than the Fact-Checking warning. Therefore, our study results indicate that a transparent machine learning warning is critical to improving individuals' fake news detection but does not necessarily increase their trust in the model.
- Conference Article
3
- 10.1109/ictai52525.2021.00168
- Nov 1, 2021
As news has become an important way to obtain information, the spread of fake news has caused serious social problems, such as misleading readers and damaging the authority of the government. Therefore, fake news detection has become an important field in social network research. One challenge of fake news detection is how to explore the common latent semantics that are universally implied in fake news. However, the existing methods are not sufficient for mining this kind of semantic information. Therefore, we propose a fake news detection framework named the Common Latent Semantics Matching Model (CLSMM), which improves the performance of fake news detection by utilizing common latent semantics in fake news. First, we use a BERT model to extract the common latent semantics of fake news and a summary generation model to extract the distinct latent semantics of each piece of news. Second, we compute a semantic credibility score according to the matching degree of the two kinds of latent semantics mentioned above. Finally, these semantic credibility scores are injected into a fake news classifier to improve detection performance. Experiments are based on two large-scale real-world social media datasets, namely Liar and BuzzFeed. The experimental results show that our model can outperform state-of-the-art methods in accuracy by 2.7% and 17.26% on Liar and BuzzFeed, respectively.
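The matching step in such a framework can be sketched as a similarity computation between latent vectors: the closer an article's semantics are to the shared fake-news pattern, the lower its credibility score. The mapping below is a hypothetical illustration; CLSMM derives its vectors from BERT and a summarization model, and its actual scoring function is not specified here:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def credibility_score(article_vec, common_fake_vec):
    """Map similarity to the common fake-news semantics onto [0, 1]:
    cosine 1 (identical to the fake pattern) -> 0.0 credibility,
    cosine -1 (opposite) -> 1.0 credibility. Illustrative only."""
    return 1.0 - (cosine(article_vec, common_fake_vec) + 1.0) / 2.0
```

A downstream classifier could then take this score as one extra input feature alongside the article's own representation.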
- Conference Article
240
- 10.1145/3404835.3462990
- Jul 11, 2021
Disinformation and fake news have posed detrimental effects on individuals and society in recent years, attracting broad attention to fake news detection. The majority of existing fake news detection algorithms focus on mining news content and/or the surrounding exogenous context for deceptive signals, while the endogenous preference of a user when he/she decides whether to spread a piece of fake news is ignored. Confirmation bias theory indicates that a user is more likely to spread a piece of fake news when it confirms his/her existing beliefs/preferences. Users' historical social engagements, such as posts, provide rich information about their preferences toward news and have great potential to advance fake news detection. However, work on exploring user preference for fake news detection is somewhat limited. Therefore, in this paper, we study the novel problem of exploiting user preference for fake news detection. We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling. Experimental results on real-world datasets demonstrate the effectiveness of the proposed framework. We release our code and data as a benchmark for GNN-based fake news detection: https://github.com/safe-graph/GNN-FakeNews.
- Research Article
- 10.1177/14648849251395797
- Nov 6, 2025
- Journalism
The rapid proliferation of fake news poses a significant challenge to information ecosystems, particularly in digital and social media environments. This study investigates the effectiveness of chatbot interventions in assisting users with fake news detection within human-computer communities. Grounded in the Heuristic-Systematic Model, the study employs a 6 (fake news type) × 3 (chatbot intervention strategy) mixed design to examine how different chatbot strategies - fact-checking, contextual explanations, and authority endorsements - affect users’ ability to identify various types of fake news. The results show that fact-checking is most effective for detecting fabrication and photo manipulation, contextual explanations enhance recognition of satire and parody-based fake news, and authority endorsements are particularly useful in countering propaganda. These findings highlight the importance of tailoring chatbot interventions to specific fake news types.
- Conference Article
4
- 10.1109/pst55820.2022.9851990
- Aug 22, 2022
The spread of digital disinformation (aka "fake news") is arguably one of the most significant threats on the Internet today, capable of causing individual and societal harm at large scales. The susceptibility to fake news attacks hinges on whether or not Internet users perceive a fake news article/snippet to be legitimate (real) after reading it. In this paper, we attempt to garner an in-depth understanding of users' susceptibility to text-centric fake news attacks via a neuro-cognitive methodology (thus corroborating as well as extending the traditional behavioral-only approach in significant ways). In particular, we investigate the neural underpinnings relevant to fake vs. real news through EEG, a well-established brain-imaging technique. We design and run an EEG experiment with human users to pursue a thorough investigation of users' perception and cognitive processing of fake vs. real news. We analyze the neural activity associated with the fake vs. real news detection task for different categories of news articles. Our results show that there may be no statistically significant or automatically inferable differences in the way the human brain processes fake vs. real news, while marked differences are observed when people are subject to (real or fake) news vs. a resting state, and even between some different categories of fake news. This neurocognitive finding may help to explain users' susceptibility to fake news attacks, as also confirmed by the behavioral analysis. In other words, fake news articles may seem almost indistinguishable from real news articles in both the behavioral and neural domains. Our work serves to dissect the fundamental neural phenomena underlying fake news attacks and explains users' susceptibility to these attacks through the limits of human biology.
We believe that this could be a notable insight for researchers and practitioners, suggesting that human detection of fake news might be ineffective, which may also have an adverse impact on the design of automated detection approaches that crucially rely upon human labeling of text articles for building training models.
- Conference Article
4
- 10.1109/icdcece53908.2022.9793155
- Apr 23, 2022
Fake news on social media is growing rapidly. The exponential growth of, and easy access to, the information available on social media networks has made it difficult to distinguish between fake and real news. Detecting fake news is therefore very important, and machine learning and deep learning techniques have been proposed to identify it. In this work, a recurrent neural network (RNN) method is used to determine whether information is real or fake. Fake news misleads people and creates wrong perceptions among them. This paper explores different textual properties that can be used to distinguish between real and fake news. Datasets of fake and true news are used to train the model with the proposed algorithm, and the accuracy of the model demonstrates the efficiency of the system.
- Research Article
3
- 10.29304/jqcsm.2024.16.21539
- Jun 30, 2024
- Journal of Al-Qadisiyah for Computer Science and Mathematics
Nowadays, social media has become the key source of information for anyone seeking news about current events across the world. This information may be fake or real. On social media platforms, fake news negatively impacts politics, the economy, and health, and affects the stability of society. Research on fake news detection has received widespread attention in the field of computer science. There are many effective fake news detection technologies, including natural language processing (NLP) and machine learning techniques, primarily focusing on content analysis and user behavior. While these methods have shown promise, they often fall short in capturing the complex relational and propagation patterns inherent in social networks. Fake news exhibits distinct features, such as misleading headlines and fabricated content, making its detection challenging. To address these issues, Graph Neural Networks (GNNs) have been introduced as a superior solution. GNNs are particularly effective at processing graph-structured data, allowing them to model the intricate connections and dissemination patterns of news in social networks more accurately. This study provides an overview of the varieties of false information and their characteristics, and discusses various techniques and features used in fake news detection, as well as advanced GNN-based techniques and the datasets used to implement practical fake news detection systems, from multiple perspectives, along with future research directions. In addition, tables and summary figures help researchers understand the full picture of fake news detection. Finally, the objective of this review is to help other researchers improve fake news detection models using GNNs.
- Research Article
20
- 10.32628/ijsrst207376
- Jun 20, 2020
- International Journal of Scientific Research in Science and Technology
In the digital age, fake news has become a well-known phenomenon. The spread of false evidence is often used to confuse mainstream media and political opponents, and can lead to social media wars, hateful arguments, and debates. Fake news blurs the distinction between real and false information and is often spread on social media, resulting in negative views and opinions. Earlier research describes how false propaganda is used to create false stories on mainstream media in order to cause revolt and tension among the masses. The Digital Rights Foundation (DRF) report, which builds on the experiences of 152 journalists and activists in Pakistan, finds that more than 88% of the participants consider social media platforms the worst source of information, with Facebook being the absolute worst. The dataset used in this paper relates to real and fake news detection, and the objective of this paper is to determine the accuracy and precision achieved on the entire dataset. The results are visualized in the form of graphs, and the analysis was done using Python. The model achieved an accuracy of 95.26% and a precision of 95.79%, whereas recall and F-measure were 94.56% and 95.17%, respectively. Among the predictions there were 296 true positives, 308 true negatives, 17 false positives, and 13 false negatives. This research recommends that the authenticity of news be analysed before forming an opinion; sharing fake news or false information is considered unethical, and journalists and news consumers alike should act responsibly when sharing any news.
- Research Article
- 10.71058/jodac.v10i02003
- Feb 7, 2026
- Journal of Dynamics and Control
In today’s digital world, fake news spreads very fast through social media and news websites. This fake information can confuse people and lead to wrong decisions, so it is very important to build a system that can detect fake news quickly and accurately. In this study, we focused on fake news detection using both simple and advanced methods. We collected real and fake news data from different platforms such as Facebook, X (Twitter), Instagram, and news websites, covering topics such as politics, education, technology, and entertainment. We used machine learning and deep learning models to understand and detect fake news. To make the data more useful, we cleaned it and added “real” or “fake” labels with the help of trained annotators, and we verified label reliability by checking agreement scores using standard statistical methods. The models were trained on this labelled data, and their performance was evaluated using accuracy, precision, recall, and F1 score. We also studied the news headlines and descriptions across different categories, looking at total words, unique words, and headline length, which helped us understand how fake and real news are written differently. Our final system showed good performance in detecting fake news in the English language, and this research can help in building better tools to identify fake news in English. It can also support journalists, readers, and fact-checkers in understanding which news is true and which is not. In the future, we aim to improve this system by adding more news types, including images and videos, and using even smarter models. This study is an important step towards reducing the harmful effects of fake news in society.
- Research Article
123
- 10.1002/asi.24359
- May 4, 2020
- Journal of the Association for Information Science and Technology
Filtering, vetting, and verifying digital information is an area of core interest in information science. Online fake news is a specific type of digital misinformation that poses serious threats to democratic institutions, misguides the public, and can lead to radicalization and violence. Hence, fake news detection is an important problem for information science research. While there have been multiple attempts to identify fake news, most of such efforts have focused on a single modality (e.g., only text‐based or only visual features). However, news articles are increasingly framed as multimodal news stories, and hence, in this work, we propose a multimodal approach combining text and visual analysis of online news stories to automatically detect fake news. Drawing on key theories of information processing and presentation, we identify multiple text and visual features that are associated with fake or credible news articles. We then perform a predictive analysis to detect features most strongly associated with fake news. Next, we combine these features in predictive models using multiple machine‐learning techniques. The experimental results indicate that a multimodal approach outperforms single‐modality approaches, allowing for better fake news detection.
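The multimodal combination the abstract describes can be sketched as early fusion: concatenate the text-derived and visual-derived feature vectors into one representation for a downstream classifier. The feature layout and the toy linear scorer below are illustrative assumptions, not the paper's actual feature set or models:

```python
def fuse_features(text_feats, visual_feats):
    """Early fusion: concatenate text and visual feature vectors into a
    single multimodal vector. Illustrative sketch only."""
    return list(text_feats) + list(visual_feats)

def linear_score(features, weights, bias=0.0):
    """Toy linear classifier over the fused vector; in the paper this
    role is played by trained machine-learning models."""
    return sum(f * w for f, w in zip(features, weights)) + bias
```

Late fusion (training separate text and visual classifiers and combining their outputs) is the common alternative; the paper's finding is that using both modalities beats either alone.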
- Research Article
1
- 10.22363/2312-9220-2023-28-2-381-396
- Dec 15, 2023
- RUDN Journal of Studies in Literature and Journalism
In the modern world, people are highly techno-friendly and dependent on technology, using the Internet for every task, and the same goes for news. People are shifting from traditional mass media to digital news platforms, getting news through websites, news portals, social media, etc. Anyone who depends on the Internet for every kind of information will inevitably encounter false information there. False or fake news is defined as any information that lacks a credible and reliable source, or any misleading information that is likely to mislead the public. The aim behind fake news transmission is to damage a person's or entity's reputation or advertising revenue. To avoid falling for fake news, one should understand fake news detection and media literacy. In the current scenario, it is essential to know whether social media users have adequate knowledge of fake news detection and media literacy, because people easily fall for rumors; mob lynching is one of the gravest consequences of rumors on the Indian Internet. In this research, the survey method and a questionnaire were used for data collection. The questionnaire was distributed randomly over different social media platforms and by email to the intended respondents. The findings reveal that most fake or false news in India is transmitted through WhatsApp, but social media users have adequate knowledge of fake news and media literacy.