Modeling the dynamics of misinformation spread: a multi-scenario analysis incorporating user awareness and generative AI impact

  • Abstract
  • References
  • Similar Papers
Abstract

The proliferation of misinformation on social media threatens public trust, public health, and democratic processes. We propose three models that analyze fake news propagation and evaluate intervention strategies. Grounded in epidemiological dynamics, the models include: (1) a baseline Awareness Spread Model (ASM), (2) an Extended Model with fact-checking (EM), and (3) a Generative AI-Influenced Spread Model (GIFS). Each incorporates user behavior, platform-specific dynamics, and cognitive biases such as confirmation bias and emotional contagion. We simulate six distinct scenarios: (1) Accurate Content Environment, (2) Peer Network Dynamics, (3) Emotional Engagement, (4) Belief Alignment, (5) Source Trust, and (6) Platform Intervention. All models converge to a single, stable equilibrium, and sensitivity analysis across key parameters confirms model robustness and generalizability. In the ASM, forwarding rates were lowest in scenarios 1, 4, and 6 (1.47%, 3.41%, 2.95%) and significantly higher in scenarios 2, 3, and 5 (19.67%, 56.52%, 29.47%). The EM showed that fact-checking reduced spread to as low as 0.73%, with scenario-based variation from 1.16% to 17.47%. The GIFS model revealed that generative AI amplified spread by 5.7%–37.8%, depending on context. The ASM highlights the importance of awareness, the EM demonstrates the effectiveness of fact-checking mechanisms, and the GIFS underscores the amplifying impact of generative AI tools. Early intervention, coupled with targeted platform moderation (scenarios 1, 4, and 6), consistently yields the lowest misinformation spread, while emotionally resonant content (scenario 3) consistently drives the highest propagation.
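
Since the abstract does not reproduce the models' equations, the following is only a minimal sketch of how an awareness-augmented, SIR-style spread model with a generative-AI amplification factor might be simulated numerically; every compartment name, parameter, and value below (`beta`, `gamma`, `alpha`, `delta`, `ai_gain`) is an illustrative assumption rather than the paper's ASM, EM, or GIFS specification.

```python
# Minimal illustrative sketch of an SIR-style misinformation model with an
# "aware" compartment and an optional generative-AI amplification factor.
# All compartments, parameter names, and values are assumptions for
# illustration; the paper's actual ASM/EM/GIFS equations are not given
# in the abstract.
import numpy as np
from scipy.integrate import solve_ivp

def misinformation_ode(t, y, beta, gamma, alpha, delta, ai_gain):
    """S: susceptible users, I: users actively forwarding misinformation,
    A: aware users (resistant after awareness campaigns or fact-checking)."""
    S, I, A = y
    exposure  = beta * (1.0 + ai_gain) * S * I   # contact-driven spread, amplified by generative AI
    recovery  = gamma * I                        # users stop forwarding on their own
    awareness = alpha * S + delta * I            # awareness / fact-checking uptake
    dS = -exposure - alpha * S
    dI = exposure - gamma * I - delta * I
    dA = recovery + awareness
    return [dS, dI, dA]

if __name__ == "__main__":
    y0 = [0.99, 0.01, 0.0]                    # initial fractions: susceptible, forwarding, aware
    params = (0.4, 0.15, 0.05, 0.10, 0.30)    # beta, gamma, alpha, delta, ai_gain (illustrative)
    sol = solve_ivp(misinformation_ode, (0.0, 100.0), y0, args=params, dense_output=True)
    for t in np.linspace(0.0, 100.0, 5):
        S, I, A = sol.sol(t)
        print(f"t={t:5.1f}  forwarding fraction I={I:.4f}  aware fraction A={A:.4f}")
```

At sketch level, setting `ai_gain` to zero gives an awareness-only baseline, while a positive `ai_gain` mimics the kind of AI-driven amplification contrast the abstract reports.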

References (showing 10 of 42 papers)

COVID-19-Related Infodemic and Its Impact on Public Health: A Global Social Media Analysis
  • Md Saiful Islam + 11 more
  • The American Journal of Tropical Medicine and Hygiene, Aug 10, 2020
  • Cited by 778
  • DOI: 10.4269/ajtmh.20-0812

Numerical Methods for Ordinary Differential Equations
  • J C Butcher
  • Mar 7, 2008
  • Cited by 892
  • DOI: 10.1002/9780470753767

The spreading of misinformation online
  • Michela Del Vicario + 7 more
  • Proceedings of the National Academy of Sciences, Jan 4, 2016
  • Cited by 1739
  • DOI: 10.1073/pnas.1517441113

Fake News Propagation: A Review of Epidemic Models, Datasets, and Insights
  • Simone Raponi + 3 more
  • ACM Transactions on the Web, Aug 31, 2022
  • Cited by 44
  • DOI: 10.1145/3522756

Impact of Artificial Intelligence-Generated Content Labels On Perceived Accuracy, Message Credibility, and Sharing Intentions for Misinformation: Web-Based, Randomized, Controlled Experiment
  • Fan Li + 1 more
  • JMIR Formative Research, Dec 24, 2024
  • Cited by 3
  • DOI: 10.2196/60024

Misinformation and Its Correction
  • Stephan Lewandowsky + 4 more
  • Psychological Science in the Public Interest, Sep 17, 2012
  • Cited by 2193
  • DOI: 10.1177/1529100612451018

Exposure to ideologically diverse news and opinion on Facebook
  • Eytan Bakshy + 2 more
  • Science, May 7, 2015
  • Cited by 2174
  • DOI: 10.1126/science.aaa1160

Epidemic modeling for misinformation spread in digital networks through a social intelligence approach
  • Sreeraag Govindankutty + 1 more
  • Scientific Reports, Aug 17, 2024
  • Cited by 6
  • DOI: 10.1038/s41598-024-69657-0

The Mathematics of Gossip
  • Jessica Deters + 2 more
  • CODEE Journal, Jan 1, 2019
  • Cited by 2
  • DOI: 10.5642/codee.201912.01.07

The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings
  • Gordon Pennycook + 3 more
  • Management Science, Nov 1, 2020
  • Cited by 425
  • DOI: 10.1287/mnsc.2019.3478

Similar Papers
Navigating Cognitive Bias and Information Integrity in AI-Driven Digital Media Ecologies
  • Front Matter
  • Toija Cinque + 1 more
  • Aug 8, 2025
  • DOI: 10.1093/9780198945215.003.0184

AI models, particularly generative AI and large language models, reshape digital information ecosystems by curating and amplifying content through user engagement metrics. Despite their capacity to reduce bias and promote inclusivity, these models simultaneously amplify cognitive biases, entrench filter bubbles, and spread misinformation. The intensification of human–machine interaction and hyper-industrialization complicates this further, as large language models increasingly mediate how information is produced and consumed. Socio-technical agency describes how AI systems co-construct human behavior and societal norms through their design, yet their effects remain understudied in regions with limited technological infrastructure. This paper investigates AI’s influence on information dissemination, cognitive biases, and user agency across digital media environments in key regions of the Global South. Drawing on qualitative interviews and a survey of 580 media technology users in South Africa, Indonesia, India, the Philippines, and Brazil, it examines how generative AI affects emotional engagement, exposure to content, and perceptions of digital truth. Framed by media ecology theory, the study evaluates AI as a cognitive extension that can both reinforce and challenge digital biases. The study proposes strategies for using generative AI to support information integrity while addressing the risks of polarization and exclusion. By centering perspectives from regions in the Global South, it contributes to more equitable discourse on AI governance, advocating regulatory and design solutions responsive to diverse media ecologies.

Whether and When Could Generative AI Improve College Student Learning Engagement?
  • Research Article
  • Fei Guo + 3 more
  • Behavioral Sciences, Jul 25, 2025
  • DOI: 10.3390/bs15081011

Generative AI (GenAI) technologies have been widely adopted by college students since the launch of ChatGPT in late 2022. While the debate about GenAI’s role in higher education continues, there is a lack of empirical evidence regarding whether and when these technologies can improve the learning experience for college students. This study utilizes data from a survey of 72,615 undergraduate students across 25 universities and colleges in China to explore the relationships between GenAI use and student learning engagement in different learning environments. The findings reveal that over sixty percent of Chinese college students use GenAI technologies in Academic Year 2023–2024, with academic use exceeding daily use. GenAI use in academic tasks is related to more cognitive and emotional engagement, though it may also reduce active learning activities and learning motivation. Furthermore, this study highlights that the role of GenAI varies across learning environments. The positive associations of GenAI and student engagement are most prominent for students in “high-challenge and high-support” learning contexts, while GenAI use is mostly negatively associated with student engagement in “low-challenge, high-support” courses. These findings suggest that while GenAI plays a valuable role in the learning process for college students, its effectiveness is fundamentally conditioned by the instructional design of human teachers.

Burden of the Beast
  • Research Article
  • Bronwyn Fredericks + 6 more
  • M/C Journal, Mar 17, 2022
  • Cited by 9
  • DOI: 10.5204/mcj.2862

Generative AI in healthcare: an implementation science informed translational path on application, integration and governance
  • Research Article
  • Sandeep Reddy
  • Implementation Science, Mar 15, 2024
  • Cited by 158
  • DOI: 10.1186/s13012-024-01357-9

Background: Artificial intelligence (AI), particularly generative AI, has emerged as a transformative tool in healthcare, with the potential to revolutionize clinical decision-making and improve health outcomes. Generative AI, capable of generating new data such as text and images, holds promise in enhancing patient care, revolutionizing disease diagnosis and expanding treatment options. However, the utility and impact of generative AI in healthcare remain poorly understood, with concerns around ethical and medico-legal implications, integration into healthcare service delivery and workforce utilisation. Also, there is not a clear pathway to implement and integrate generative AI in healthcare delivery. Methods: This article aims to provide a comprehensive overview of the use of generative AI in healthcare, focusing on the utility of the technology in healthcare and its translational application highlighting the need for careful planning, execution and management of expectations in adopting generative AI in clinical medicine. Key considerations include factors such as data privacy, security and the irreplaceable role of clinicians’ expertise. Frameworks like the technology acceptance model (TAM) and the Non-Adoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) model are considered to promote responsible integration. These frameworks allow anticipating and proactively addressing barriers to adoption, facilitating stakeholder participation and responsibly transitioning care systems to harness generative AI’s potential. Results: Generative AI has the potential to transform healthcare through automated systems, enhanced clinical decision-making and democratization of expertise with diagnostic support tools providing timely, personalized suggestions. Generative AI applications across billing, diagnosis, treatment and research can also make healthcare delivery more efficient, equitable and effective. However, integration of generative AI necessitates meticulous change management and risk mitigation strategies. Technological capabilities alone cannot shift complex care ecosystems overnight; rather, structured adoption programs grounded in implementation science are imperative. Conclusions: It is strongly argued in this article that generative AI can usher in tremendous healthcare progress, if introduced responsibly. Strategic adoption based on implementation science, incremental deployment and balanced messaging around opportunities versus limitations helps promote safe, ethical generative AI integration. Extensive real-world piloting and iteration aligned to clinical priorities should drive development. With conscientious governance centred on human wellbeing over technological novelty, generative AI can enhance accessibility, affordability and quality of care. As these models continue advancing rapidly, ongoing reassessment and transparent communication around their strengths and weaknesses remain vital to restoring trust, realizing positive potential and, most importantly, improving patient outcomes.

Evaluation of generative artificial intelligence (GENAI) as a transformative technology for effective and efficient governance, political knowledge, electoral, and democratic processes
  • Research Article
  • Chiji Longinus Ezeji + 1 more
  • International Journal of Business Ecosystem & Strategy (2687-2293), Jul 15, 2025
  • DOI: 10.36096/ijbes.v7i3.831

The incorporation of generative artificial intelligence in governance, political knowledge, electoral, and democratic processes is essential as the world transitions to a digital paradigm. Numerous nations have adopted Generative AI (GenAI), a disruptive technology that compels electoral bodies to advocate for the integration of such tools into governance, electoral, and democratic processes. Nevertheless, these technologies do not ensure effortless integration or efficient usage owing to intricate socio-cultural and human dynamics. Certain African jurisdictions are ill-prepared for the adoption of these technologies due to factors including underdevelopment, insufficient electrical supply, lack of technology literacy, reluctance to change, and the goals of governing parties. This study examines generative artificial intelligence as a disruptive technology for enhancing governance, political knowledge, electoral processes, and democracy. A mixed-method approach was employed, incorporating surveys and in-person interviews. The analysis of data, debates, and interpretation of findings were grounded in postdigital theory and thematic analysis employing an abductive reasoning technique, in alignment with the tenets of critical realism. The study demonstrated that GenAI can influence political knowledge and electoral processes and enhance efficiency in governance and democracy. Moreover, GenAI tools such as ChatGPT can either exacerbate or mitigate societal tendencies that contribute to human division, facilitate the dissemination of misinformation, perpetuate echo chambers, and undermine social and political trust, as well as polarise groups with disparate viewpoints or beliefs. AI offers substantial opportunities but also poses many obstacles, including technical constraints, ethical dilemmas, and social ramifications. The swift progression of AI may disrupt labour markets by automating tasks conventionally executed by people, resulting in job displacement. Implementing AI necessitates significant upskilling and proficiency with digital tools; therefore, governments and organisations must adequately train their personnel to reconcile the disparity between AI's capabilities and users' comprehension. Additionally, there is a requisite for governmental oversight, regulation, and monitoring of AI adoption and utilisation across all facets of its implementation.

Student Engagement and Teacher Perceived Support in STEAM Education Using Generative AI: A Systematic Review and Direction for Future Research
  • Research Article
  • Cheuk Kwan Au + 2 more
  • School Science and Mathematics, Oct 28, 2025
  • DOI: 10.1111/ssm.18413

The emergence of generative AI (GenAI), such as ChatGPT, in education reconceptualizes the realm and is novel to researchers and practitioners alike. Over the past few years, systematic reviews of the impact of GenAI on education have increased, focusing on language education and general education. Such reviews may overlook other integrated disciplines. This impact can be reflected in student engagement (learning outcomes) and teacher perspectives. In response, this review aims to investigate the impact of integrating GenAI on student engagement and teacher‐perceived support in science, technology, engineering, art, and mathematics (STEAM) education. It used a thematic analysis approach to examine relevant articles published over the past 5 years (2020–2024). The findings suggest 11 constructs on how GenAI tools affect the development of student cognitive, behavioral, and emotional engagement. They also suggest three themes about how STEAM teachers felt about GenAI tools—attitude, pedagogy, and 21st‐century skills. We used the findings to suggest recommendations for future directions of GenAI research.

ChatGPT-4 as a journalist: Whose perspectives is it reproducing?
  • Research Article
  • Petre Breazu + 1 more
  • Discourse & Society, May 21, 2024
  • Cited by 9
  • DOI: 10.1177/09579265241251479

The rapid emergence of generative AI models in the media sector demands a critical examination of the narratives these models produce, particularly in relation to sensitive topics, such as politics, racism, immigration, public health, gender and violence, among others. The ease with which generative AI can produce narratives on sensitive topics raises concerns about potential harms, such as amplifying biases or spreading misinformation. Our study juxtaposes the content generated by a state-of-the-art generative AI, specifically ChatGPT-4, with actual articles from leading UK media outlets on the topic of immigration. Our specific case study focusses on the representation of Eastern European Roma migrants in the context of the 2016 UK Referendum on EU membership. Through a comparative critical discourse analysis, we uncover patterns of representation, inherent biases and potential discrepancies in representation between AI-generated narratives and mainstream media discourse with different political views. Preliminary findings suggest that ChatGPT-4 exhibits a remarkable degree of objectivity in its reporting and demonstrates heightened racial awareness in the content it produces. Moreover, it appears to consistently prioritise factual accuracy over sensationalism. All these features set it apart from right-wing media articles in our sample. This is further evidenced by the fact that, in most instances, ChatGPT-4 refrains from generating text or does so only after considerable adjustments when prompted with headlines that the model deems inflammatory. While these features can be attributed to the model’s diverse training data and model architecture, the findings invite further examination to determine the full scope of ChatGPT-4’s capabilities and its potential shortcomings in representing the full spectrum of social and political perspectives prevalent in society.

To Authenticity, and Beyond! Building Safe and Fair Generative AI Upon the Three Pillars of Provenance
  • Research Article
  • John Collomosse + 1 more
  • IEEE Computer Graphics and Applications, May 1, 2024
  • Cited by 7
  • DOI: 10.1109/mcg.2024.3380168

Provenance facts, such as who made an image and how, can provide valuable context for users to make trust decisions about visual content. Against a backdrop of inexorable progress in generative AI for computer graphics, over two billion people will vote in public elections this year. Emerging standards and provenance enhancing tools promise to play an important role in fighting fake news and the spread of misinformation. In this article, we contrast three provenance enhancing technologies-metadata, fingerprinting, and watermarking-and discuss how we can build upon the complementary strengths of these three pillars to provide robust trust signals to support stories told by real and generative images. Beyond authenticity, we describe how provenance can also underpin new models for value creation in the age of generative AI. In doing so, we address other risks arising with generative AI such as ensuring training consent, and the proper attribution of credit to creatives who contribute their work to train generative models. We show that provenance may be combined with distributed ledger technology to develop novel solutions for recognizing and rewarding creative endeavor in the age of generative AI.

A Study on an Art Education Model Using Generative AI to Cultivate Creativity and Convergence Competency (창의·융합 역량 함양을 위한 생성형 AI 활용 미술교육 모델 연구)
  • Research Article
  • Jisook Park
  • Korean Society for Creativity Education, Mar 31, 2025
  • DOI: 10.36358/jce.2025.25.1.1

This study examines the role of Generative AI in school art education, focusing on enhancing Creativity and Convergence Competency. It explores a Generative AI-based art education model and proposes an Art Game as a practical tool. Effective Generative AI integration strategies were developed based on school environments and student needs. In elementary education, Generative AI-supported visual exploration and creative idea generation were emphasized. Middle and high school programs focused on critical thinking and art appreciation through historical and analytical approaches. At the university level, experimental Generative AI-driven projects incorporated ethical and philosophical discussions to deepen artistic expression. The Persona-based Artist concept in the Art Game encouraged personal and emotional engagement, fostering creative collaboration with Generative AI. By clarifying Generative AI’s role in education, this study provides a framework for its effective use while addressing implementation challenges. It proposes a practical model that enables students to confidently create and innovate with Generative AI, strengthening their Creativity and Convergence Competency.

Confirmation and Cognitive Bias in Design Cognition
  • Conference Article
  • Gregory M Hallihan + 2 more
  • Aug 12, 2012
  • Cited by 18
  • DOI: 10.1115/detc2012-71258

The desire to better understand design cognition has led to the application of literature from psychology to design research, e.g., in learning, analogical reasoning, and problem solving. Psychological research on cognitive heuristics and biases offers another relevant body of knowledge for application. Cognitive biases are inherent biases in human information processing, which can lead to suboptimal reasoning. Cognitive heuristics are unconscious rules utilized to enhance the efficiency of information processing and are possible antecedents of cognitive biases. This paper presents two studies that examined the role of confirmation bias, which is a tendency to seek and interpret evidence in order to confirm existing beliefs. The results of the first study, a protocol analysis involving novice designers engaged in a biomimetic design task, indicate that confirmation bias is present during concept generation and offer additional insights into the influence of confirmation bias in design. The results of the second study, a controlled experiment requiring participants to complete a concept evaluation task, suggest that decision matrices are effective tools to reduce confirmation bias during concept evaluation.

Exploring Generative AI: Models, Applications, and Challenges in Data Synthesis
  • Research Article
  • S Ramalakshmi + 1 more
  • Asian Journal of Research in Computer Science, Dec 13, 2024
  • Cited by 3
  • DOI: 10.9734/ajrcos/2024/v17i12533

Generative AI has emerged as a transformative field within artificial intelligence, enabling the creation of new data that mimics real-world information and expands the boundaries of what machines can autonomously generate. This study discusses the various models of generative AI, focusing on Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Auto-Regressive models, each offering distinct approaches and strengths in data generation. VAEs excel in learning latent representations, making them ideal for applications like anomaly detection and data imputation. GANs, renowned for their high-quality image synthesis, have found extensive use in tasks ranging from text-to-image conversion to super-resolution. Auto-Regressive models, on the other hand, are particularly effective in sequential data generation, such as text generation, music composition, and time series prediction. The paper highlights key applications of these models across diverse domains, including image synthesis, text generation, drug discovery, and simulation tasks in fields like healthcare, finance, and entertainment. Additionally, the study emphasizes the evaluation metrics (also called comparative parameters) crucial for assessing the performance of generative models, such as perceptual quality metrics, Inception Score (IS), and Fréchet Inception Distance (FID), which provide quantitative insights into the quality and diversity of generated data. This study employs a systematic methodology comprising a comprehensive literature review, strategic search queries, and thematic data synthesis to explore generative AI. Key areas of focus include models (VAE, GAN, auto-regressive, flow-based), applications, evaluation techniques, challenges, and recent advances. The analysis identifies emerging trends, novel methods, and critical gaps in the field. This study also compares the performance of the three GenAI model families along comparative parameters such as data type, applications, training complexity, output quality, interpretability, limitations, advantages, computational cost, and scalability. Generative AI raises ethical concerns, including biases in training data that perpetuate stereotypes and marginalization. It can be misused for harmful purposes like creating deepfakes or spreading misinformation, impacting trust and privacy. Questions of accountability and ownership arise when AI-generated content infringes on intellectual property or causes harm. Addressing these issues is essential for responsible AI deployment.
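
The Fréchet Inception Distance named in this abstract has a closed form over feature means and covariances; the hedged sketch below computes it for two sets of feature vectors. The random "real" and "generated" features are placeholders standing in for Inception-network activations, and nothing here comes from the surveyed paper itself.

```python
# Hedged sketch: computing the Fréchet Inception Distance (FID) from two sets
# of feature vectors. The random features below are placeholders; in practice
# the features come from a pretrained Inception network applied to real and
# generated images.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_real, feat_gen):
    mu1, mu2 = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    sigma1 = np.cov(feat_real, rowvar=False)
    sigma2 = np.cov(feat_gen, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts from numerical error
        covmean = covmean.real
    diff = mu1 - mu2
    # FID = ||mu1 - mu2||^2 + Tr(Sigma1 + Sigma2 - 2 (Sigma1 Sigma2)^(1/2))
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(500, 64))   # placeholder "real" features
    fake = rng.normal(0.1, 1.1, size=(500, 64))   # placeholder "generated" features
    print(f"FID (placeholder features): {frechet_distance(real, fake):.3f}")
```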

Enhancing student engagement in online collaborative writing through a generative AI-based conversational agent
  • Research Article
  • Wanqing Hu + 2 more
  • The Internet and Higher Education, Nov 17, 2024
  • Cited by 3
  • DOI: 10.1016/j.iheduc.2024.100979

Application and Comparative Study of Generative Artificial Intelligence for Epidemic Prediction of Coronavirus Disease
  • Research Article
  • Zongjing Liang + 3 more
  • Cureus, Aug 1, 2025
  • DOI: 10.7759/cureus.91318

Background and Objectives: In the past twenty years, several large-scale coronavirus outbreaks have caused heavy loss of life and serious economic damage worldwide. Current global surveillance suggests that similar epidemics may occur again, making timely and accurate forecasting an urgent priority. Yet, many existing prediction methods, mainly based on traditional statistical or machine learning techniques, still struggle to deliver both speed and precision. This study explores a generative artificial intelligence-driven approach aimed at narrowing these gaps. Methods: Nine models (three statistical models, three machine learning models, and three generative artificial intelligence models) were compared using weekly COVID-19 case and death data from the United States (US), the United Kingdom (UK), Germany (GE), and Russia (RU) from March 15, 2020, to April 15, 2023. The statistical models used are simple moving average (SMA), simple exponential smoothing (SES), and the Holt linear trend model (Holt). The machine learning models used are k-nearest neighbor regression (KNN), regression tree (RTree), and multilayer perceptron (MLP). The generative AI models used are ChatGPT, DeepSeek (DS), and Kimi. A custom MATLAB program was used to solve the statistical and machine learning models, and the zero-inference forecasting method was used to solve the generative AI models. According to the stepwise prediction theory, error metrics for one-, two-, and three-step forecasts were calculated: mean absolute percentage error (MAPE), mean absolute error (MAE), and root mean square error (RMSE). The forecasting performance of the models was then compared using these one-, two-, and three-step prediction error metrics. Results: In our analysis, generative AI models consistently delivered the most accurate forecasts. Kimi, in particular, recorded the smallest errors for death predictions and among the lowest for new cases, while DS and ChatGPT also performed well, clearly surpassing the statistical and machine learning approaches in short-term COVID-19 forecasting. Conclusion: The results of this study show that generative AI models deliver superior predictive accuracy and robustness in epidemic forecasting compared to traditional statistical and machine learning models. This research is innovative in its application of generative AI technology to public health decision-making, demonstrating its robust epidemic forecasting capabilities. Given these proven advantages, public health authorities can integrate generative AI technology into major infectious disease surveillance systems, promote public health data sharing mechanisms, and incorporate generative AI into epidemic intervention and resource allocation. The implementation of these measures will enable governments and regulatory agencies worldwide to use generative AI to enhance early warning capabilities and improve their response to future infectious disease epidemics.
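
As a reference for the error metrics this abstract names, here is a hedged sketch of MAPE, MAE, and RMSE computed for one-, two-, and three-step-ahead forecasts; the toy weekly series and naive persistence forecaster are placeholders, not the study's data or models.

```python
# Hedged sketch of the error metrics named in the abstract (MAPE, MAE, RMSE)
# for multi-step-ahead forecasts. The toy series and naive forecaster below
# are placeholders for illustration only.
import numpy as np

def forecast_errors(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    err = actual - predicted
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err) / np.abs(actual))   # assumes actual values are nonzero
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape}

if __name__ == "__main__":
    weekly_cases = np.array([120, 150, 180, 220, 260, 300, 310, 290], float)
    # Naive h-step persistence forecast: reuse the value observed h weeks earlier.
    for h in (1, 2, 3):
        actual = weekly_cases[h:]
        predicted = weekly_cases[:-h]
        print(f"{h}-step:", forecast_errors(actual, predicted))
```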

Investor Bias: A Case of Nepalese Investor Perspective
  • Research Article
  • Bikash Rana
  • The Lumbini Journal of Business and Economics, Apr 25, 2023
  • DOI: 10.3126/ljbe.v11i1.54321

Behavioral finance incorporates psychology into finance and studies the behavior of individuals as guided by behavioral biases. The current study examines the behavioral biases seen in Nepalese stock investors and whether these biases affect investors' financial decisions. The study tested the following behavioral biases: loss aversion, overconfidence, optimism, mental accounting, illusion of control, confirmation, and status quo bias. Data were collected from 136 respondents; the sample size was set at a minimum of 120 on the basis of Roscoe's (1975) rule of thumb. Likewise, four in-depth interviews were conducted to collect responses from institutional investors, with the number of interviews determined using the Raosoft sample size calculator. The study showed that loss aversion, overconfidence, and confirmation bias were significantly correlated with investors' financial decision making. The regression analysis, however, showed that loss aversion, overconfidence, and optimism bias influence financial decisions, while confirmation bias did not have a significant relationship; behavioral biases as a whole also affect financial decisions. Likewise, the study showed that status quo bias and mental accounting bias prevail among institutional investors, and these biases also influenced individual investors' financial decisions. Overall, the study shows that Nepalese investors are influenced by behavioral biases.

The Instagram Infodemic: Cobranding of Conspiracy Theories, Coronavirus Disease 2019 and Authority-Questioning Beliefs
  • Research Article
  • Emma K Quinn + 2 more
  • Cyberpsychology, Behavior, and Social Networking, Dec 18, 2020
  • Cited by 53
  • DOI: 10.1089/cyber.2020.0663

The novel coronavirus 2019 pandemic has brought about an overabundance of misinformation concerning the virus (SARS-CoV-2) and the coronavirus disease 2019 (COVID-19) it causes spreading rapidly on social media. While some more obviously untrustworthy sources may be easier for social media filters to identify and remove, an early feature was the cobranding of COVID-19 misinformation with other types of misinformation. To examine this, the top 10 Instagram posts (in English) were collected every day for 10 days (April 21-30th, 2020) for each of the hashtags #hoax, #governmentlies, and #plandemic. The #hoax was selected first as it is commonly used in conspiracy theory posts, and #governmentlies because it was the most commonly cotagged with #hoax. For comparison, we selected #plandemic as the most popular cotagged hashtag that was clearly COVID-19-related. This resulted in 300 Instagram posts available for our analysis. We conducted a content analysis by coding the themes contained in the posts, both for the images and the text caption shared by the Instagram users (including hashtags). The broad theme of general mistrust was the most common, including the idea that the government and/or media has fabricated or hidden information pertaining to COVID-19. Conspiracy theories were the second-most frequent theme among posts. Overall, COVID-19 was frequently presented in association with authority-questioning beliefs. Developing an understanding of how the public shares misinformation on COVID-19 alongside conspiracy theories and authority-questioning statements can aid public health officials and policymakers in limiting the spread of potentially life-threatening health misinformation.

More from: Frontiers in Computer Science
CLMOAS: collaborative large-scale multi-objective optimization algorithms with adaptive strategies
  • Research Article
  • Peng Wang + 7 more
  • Frontiers in Computer Science, Nov 6, 2025
  • DOI: 10.3389/fcomp.2025.1692784

RWAFormer: a lightweight road LiDAR point cloud segmentation network based on transformer
  • Research Article
  • Zirui Li + 4 more
  • Frontiers in Computer Science, Nov 6, 2025
  • DOI: 10.3389/fcomp.2025.1542813

Optimized encoder-based transformers for improved local and global integration in railway image classification
  • Research Article
  • Lilan Li + 3 more
  • Frontiers in Computer Science, Nov 5, 2025
  • DOI: 10.3389/fcomp.2025.1658556

Exploring pose estimation in instrumental composition: the Body Fragmented project
  • Research Article
  • Jenn Kirby
  • Frontiers in Computer Science, Nov 5, 2025
  • DOI: 10.3389/fcomp.2025.1570296

Deep federated learning: a systematic review of methods, applications, and challenges
  • Research Article
  • Lakshan Cooray + 3 more
  • Frontiers in Computer Science, Nov 4, 2025
  • DOI: 10.3389/fcomp.2025.1617597

Anomaly detection in netflow traffic: workflow for dataset preparation and analysis
  • Research Article
  • Evita Roponena + 2 more
  • Frontiers in Computer Science, Nov 3, 2025
  • DOI: 10.3389/fcomp.2025.1676362

Enhancing IoT security through blockchain integration
  • Research Article
  • Wafa Shujaa + 2 more
  • Frontiers in Computer Science, Oct 29, 2025
  • DOI: 10.3389/fcomp.2025.1670473

Measuring agility in software development teams: development and initial validation of the Agile Team Practice Inventory for Software Development (ATPI-SD)
  • Research Article
  • Niklas Retzlaff + 1 more
  • Frontiers in Computer Science, Oct 27, 2025
  • DOI: 10.3389/fcomp.2025.1626456

How relevant are personas in open-source software development?
  • Research Article
  • Ahmed Chelly + 2 more
  • Frontiers in Computer Science, Oct 27, 2025
  • DOI: 10.3389/fcomp.2025.1457563

Approaches to notation for embodied engagement with a novel neural network-based musical instrument
  • Research Article
  • Benjamin Keith Bacon + 2 more
  • Frontiers in Computer Science, Oct 21, 2025
  • DOI: 10.3389/fcomp.2025.1597806
