- Research Article
- 10.37016/mr-2020-194
- Feb 5, 2026
- Harvard Kennedy School Misinformation Review
- Sungha Kang + 2 more
In today’s digital media environment, emotionally resonant narratives often spread faster and stick more firmly than verifiable facts. This paper explores how emotionally charged communication in online controversies fosters not only widespread engagement but also participatory forms of misinformation. Through a case study of a K-pop controversy, we show how audiences act not just as consumers but as co-authors of alternative narratives in moments of uncertainty. These dynamics reflect a broader trend in which emotionally driven discourse increasingly shapes public perception, challenging the role of facts in public debate.
- Research Article
- 10.37016/mr-2020-192
- Dec 22, 2025
- Harvard Kennedy School Misinformation Review
When a bank run, a pandemic, or an election spirals out of control, the spark is often informational. In 2023, rumors online helped accelerate the collapse of Silicon Valley Bank. During COVID-19, false claims about vaccines fueled preventable harms by undermining public trust in health guidance, and election lies in the United States fed into the broader dynamics that culminated in the January 6 Capitol attack. These events reveal that misinformation is not just about false or misleading content, but about how degraded information can destabilize entire social systems. To confront this, we must reframe misinformation as an informational-systemic risk that amplifies volatility across politics, health, and security.
- Research Article
- 10.37016/mr-2020-191
- Dec 16, 2025
- Harvard Kennedy School Misinformation Review
What if nearly everything we think we know about misinformation came from just a sliver of the world? When research leans heavily on online studies from a few wealthy nations, we risk drawing global conclusions from local noise. A WhatsApp group of fishermen, a displaced community in a refugee camp, or a bustling market in the Global South are not marginal examples of information environments; such contexts call for an evolution of how we study misinformation. In this commentary, I argue that progress in misinformation studies requires expanding methodological reach beyond convenience samples, critically reassessing causal assumptions, engaging in participatory intervention design, and incorporating insights from both encrypted and offline information networks to develop more contextually grounded and globally relevant strategies.
- Research Article
- 10.37016/mr-2020-190
- Nov 25, 2025
- Harvard Kennedy School Misinformation Review
- Yevgeniy Golovchenko + 3 more
This research note investigates the aftermath of YouTube's global ban on Russian state-affiliated media channels in the wake of Russia's full-scale invasion of Ukraine in 2022. Using over 12 million YouTube comments across 40 Russian-language channels, we analyzed the effectiveness of the ban and shifts in user activity before and after the platform’s intervention. We found that YouTube, as it had promised, effectively eliminated user activity on the banned channels. However, the ban did not stop users from seeking out ideologically similar content on other channels and, in turn, increased engagement on otherwise less visible pro-Kremlin channels.
- Research Article
- 10.37016/mr-2020-189
- Nov 10, 2025
- Harvard Kennedy School Misinformation Review
- Sean Guo + 2 more
Advances in artificial intelligence (AI) allow the rapid creation of AI-synthesized images. In a pre-registered experiment, we examine how the properties of AI-synthesized images influence belief in misinformation and memory for corrections. Realistic and probative (i.e., providing strong evidence) images predicted greater belief in false headlines. Additionally, we found preliminary evidence that paying attention to the properties of images can selectively lower belief in false headlines. Our findings suggest that advances in photorealistic image generation will likely increase susceptibility to misinformation, and that future interventions should consider shifting attention to images.
- Research Article
- 10.37016/mr-2020-186
- Oct 13, 2025
- Harvard Kennedy School Misinformation Review
- Myojung Chung
What if knowing how social media algorithms work doesn’t make you a more responsible digital citizen, but a more cynical one? A new survey of U.S. young adults finds that while higher algorithmic awareness and knowledge are linked to greater concern about misinformation and filter bubbles, they are also associated with a lower likelihood of correcting misinformation or engaging with opposing viewpoints on social media, possibly reflecting limited algorithmic agency. The findings challenge common assumptions about algorithmic literacy and highlight the need for deeper educational and policy interventions that go beyond simply teaching how algorithms function.
- Research Article
- 10.37016/mr-2020-184
- Oct 8, 2025
- Harvard Kennedy School Misinformation Review
- Amir Karami
In response to the escalating threat of misinformation, social media platforms have introduced a wide range of interventions aimed at reducing the spread and influence of false information. However, a coherent macro-level perspective that explains how these interventions operate, both independently and collectively, has been lacking. To address this gap, I offer a dual typology that arranges interventions along a spectrum aligned with deterrence theory, drawing parallels from international relations, the military, cybersecurity, and public health. I argue that five major types of platform interventions (removal, reduction, informing, composite, and multimodal) can be mapped to five corresponding deterrence mechanisms (hard, situational, soft, integrated, and mixed deterrence) based on purpose and perceptibility. These mappings illuminate how platforms apply varying degrees of deterrence to influence user behavior.
- Research Article
- 10.37016/mr-2020-185
- Oct 6, 2025
- Harvard Kennedy School Misinformation Review
- Fan Yang + 2 more
Outside China, WeChat is a conduit for translating and circulating English-language information among the Chinese diaspora. Australian domestic political campaigns exploit the gaps between platform governance and national media policy, using Chinese-language digital media outlets that publish through WeChat’s “Official Accounts” feature to reproduce disinformation from English-language sources. These campaigns are situated within local contexts and technological conditions. We show how WeChat content uses emotionally appealing disinformation to capture attention: these posts rely on familiar but misleading stories to fill knowledge gaps, draw on historical references to ease uncertainty about the future, and downplay or erase Indigenous issues through colonial tropes. These emotionally charged messages are then amplified by WeChat’s algorithm, helping them reach even wider audiences.
- Research Article
- 10.37016/mr-2020-182
- Aug 27, 2025
- Harvard Kennedy School Misinformation Review
- Anqi Shao
In February 2025, Google’s AI Overview fooled itself and its users when it cited an April Fools’ satire about “microscopic bees powering computers” as factual in search results (Kidman, 2025). Google did not intend to mislead, yet the system produced a confident falsehood. Such cases mark a shift from misinformation caused by human mistakes to errors generated by probabilistic AI systems that have no understanding of accuracy and no intent to deceive. Working from a definition of misinformation as any content that contradicts the best available evidence, I argue that such “AI hallucinations” represent a distinct form of misinformation requiring new frameworks of interpretation and intervention.
- Research Article
- 10.37016/mr-2020-180
- Jul 29, 2025
- Harvard Kennedy School Misinformation Review
- Xinyu Wang + 4 more
A significant body of research is dedicated to developing language models that can detect various types of online abuse, such as hate speech and cyberbullying. However, there is a disconnect between platform policies, which often treat the author's intention as a criterion for content moderation, and current detection models, which typically make no attempt to capture intent. This paper examines the role of intent in the moderation of abusive content. Specifically, we review state-of-the-art detection models and benchmark training datasets to assess their ability to capture intent. We then propose changes to the design and development of automated detection and moderation systems to better align them with ethical and policy conceptualizations of these abuses.