Risk denialism and “alternative facts” as championed by the outgoing US president have cost many lives in the COVID-19 pandemic and are on track to cost many more in the climate catastrophe. After years of growing disconnect between public discourse and reality, how can societies return to a consensual view of established truth? Michael Gross investigates.

After the US presidential election of 2020, as the world grew weary of the incumbent’s refusal to concede defeat, attention turned to his predecessor. The occasion was the publication of the first volume of his presidential memoir, but understandably his views were also being sought about the state of the world at the end of the Trump presidency, which had arguably resulted from a backlash to his own. In an interview with Jeffrey Goldberg, the editor-in-chief of The Atlantic, Barack Obama expressed his concern about the “post-truth” world we live in. “If we do not have the capacity to distinguish what’s true from what’s false, then by definition the marketplace of ideas doesn’t work. And by definition our democracy doesn’t work. We are entering into an epistemological crisis,” Obama said. As Sarah Churchwell at the School of Advanced Study, University of London, UK, elaborated in an essay she prepared at around the same time, “American democracy is facing not merely a crisis in trust, but in knowledge itself, largely because language has become increasingly untethered from reality, as we find ourselves in a swirling maelstrom of lies, disinformation, paranoia and conspiracy theories”.
Using climate change as an example, Obama acknowledged that there are discussions to be had about how to tackle it and whether any action would work if important polluters don’t join, but he expressed his bewilderment at the denialist views still prevalent among Republicans in the US: “I don’t know what to say if you simply say, ‘This is a hoax that the liberals have cooked up, and the scientists are cooking the books. And that footage of glaciers dropping off the shelves of Antarctica and Greenland are all phony.’ Where do I start trying to figure out where to do something?” As the world faces the combined disasters of climate change and COVID-19, risk denialists are blocking efforts on both fronts. At the time of writing, vaccinations against the coronavirus are beginning in the UK, but lockdown sceptics and opponents of vaccinations (anti-vaxxers) are joining forces in many places to protest against any such evidence-based responses. Germany, for instance, which managed the first wave of the pandemic efficiently and therefore had substantially lower death rates than most European countries, saw a surge of demonstrations over the summer, where a colourful spectrum of denialism, conspiracy theories, and far-right ideology converged. The only link between these different shades of dissidence appeared to be a rejection of any information coming from the country’s democratically elected government and its normally well-respected mainstream media. Thus, faced with two global crises now rather than just one, and a growing number of citizens apparently living in an alternative reality and denying established facts, the question for the world’s democracies is more urgent than ever: how can the electorate be persuaded to reject misinformation and accept the scientifically proven state of reality? 
In a recent analysis of the problem of denialist beliefs, Helen De Cruz at Saint Louis University, USA, identified two conflicting interpretations prevalent in the literature and offered a third way that combines both (Soc. Epistemol. (2020) 34, 440–452). In complex scientific issues such as climate change or disease prevention, the lay person faces the dilemma of having to decide between conflicting testimonies. In the current post-truth crisis, false statements discredited by scientific evidence have been spread and amplified through social contacts both in person and in online networks. Social scientists and psychologists have pointed out that individuals may have valid reasons to trust the information of their neighbourhood WhatsApp group more than the editorials of Nature and Science. Trusting the people you know rather than the experts you don’t know can be a rational decision if you suspect that strangers may want to deceive you or take advantage of you. In this interpretation, proposed by Neil Levy at the University of Oxford, UK, getting good information may just be a question of whom you know (Synthese (2019) 196, 313–327). Levy describes US liberals as epistemically luckier than conservatives, but the same could be said about the readership of this journal. We as scientists are lucky, as we know lots of other scientists and get reliable information from people and sources we trust. In the current US situation, however, where trust in science has become an increasingly partisan issue since the 1980s, Levy considers conservatives to be epistemically unlucky, as the people they trust are likely to be passing on misinformation. The alternative view is the cultural cognition hypothesis, championed by Dan Kahan at Yale University, USA, and others (J. Risk Res. (2011) 14, 147–174), which holds that cultural values mainly drive the evaluation of risk and factual information.
When cultural identification with the in-group trumps the objective truthfulness of information, anybody offering factually correct information may by this very act mark themselves as an out-group member and will therefore not be trusted. As De Cruz explains with examples from the history of religious schisms, the very fact that group members believe (or claim to believe) in something that everybody else would dismiss as fanciful can serve as a powerful identifier of belonging. A more recent dramatic example is the selection of the 2020 UK government on the basis of ardent belief in the Brexit project. De Cruz reconciles the epistemic and the cultural explanations of denialism, arguing that both factors are required to explain the full range of phenomena observed, including the increasing polarisation on important scientific issues, which Levy’s rational preference for peer testimony presupposes but does not explain. Psychologists have long studied how people change their minds in response to information obtained from peers. A simple and well-established model is the estimation task, where participants are asked to estimate a quantity and later to revise their estimate in response to new information, such as the estimates of other participants. In a recent variation on the theme, Lucas Molleman and colleagues at the Max Planck Institute for Human Development in Berlin, Germany, and the University of Amsterdam, the Netherlands, adapted the estimation task to the kind of information input people get from social media, where conflicting information is presented and some of it may be quite obviously false (Proc. R. Soc. B (2020) 287, 20202413).
After experimentally changing the distribution of the estimates that the participants were given by three ‘peers’ who were really part of the set-up, the researchers found that participants were most likely to adjust their estimates when those from the peer group were in close agreement with each other and not too different from their own. “Our experiment quantifies how people weigh their own prior beliefs and the beliefs of others. In our context, there is actually no reason to assume that one’s own estimate is better than anyone else’s. But what we see here is an effect known in psychology as ‘egocentric discounting’ — namely that people put more weight on their own beliefs than on those of others,” co-author Alan Novaes Tump from the Max Planck Institute said. “What’s more, our study reveals that this weighting is strongly impacted by the consistency of others’ beliefs with one’s own: people are more likely to heed information that confirms their own beliefs.” Based on the quantitative information obtained from these experiments, the researchers created models to predict the response of people confronted with a range of disparate information. In particular, they were interested in how the weighting in favour of similar opinions, amplified by recommendation algorithms, produces the so-called filter-bubble effect (Curr. Biol. (2015) 25, R255–R258). They anticipate that such models could help to fight polarisation of opinion in online networks. While some kinds of bias such as confirmation bias are built into the human mind for evolutionary reasons, the technology currently used in online social networks has a habit of turbo-charging these biases, as Filippo Menczer at Indiana University Bloomington, USA, and Thomas Hills at the University of Warwick, UK, have noted (Sci. Am. (2020) 323, 6, 54–61).
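The two effects the experiments quantify — egocentric discounting and the preference for confirmatory information — can be captured in a toy belief-revision rule. The sketch below is an illustration of the general idea, not the authors’ fitted model; the function name, the self-weight of 0.6 and the 30% tolerance are illustrative assumptions.

```python
import statistics

def revise_estimate(own, peers, self_weight=0.6, tolerance=0.3):
    """Toy belief-revision rule: the agent overweights its own
    estimate (egocentric discounting, self_weight > 0.5) and only
    heeds peer estimates within a relative tolerance of its own
    belief (confirmation-dependent weighting)."""
    confirmatory = [p for p in peers if abs(p - own) <= tolerance * own]
    if not confirmatory:
        return own  # peer estimates that all conflict strongly are ignored
    peer_mean = statistics.mean(confirmatory)
    return self_weight * own + (1 - self_weight) * peer_mean

# Peers in close agreement, near the agent's own estimate, pull it along:
print(revise_estimate(100, [110, 112, 115]))  # ≈ 104.9, well short of the peer mean 112.3
# Widely scattered, distant estimates leave the belief unchanged:
print(revise_estimate(100, [250, 300, 40]))   # 100
```

In this sketch the revised belief moves towards consistent, nearby peer estimates but never adopts them fully, mirroring the experimental finding that participants put more weight on their own prior beliefs than on those of others.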
Due to the permanent information overload produced by modern communication technology, described by the authors as the “attention economy”, users are particularly vulnerable to the systematic exploitation of such weaknesses in decision making. The groups of Menczer and Hills have analysed large datasets to study the formation of “echo chambers” where people are exposed to an artificially narrowed spectrum of opinion. They have also run simulations to recreate such effects and understand them better quantitatively. In these, they could show how factors such as social influence and following/unfollowing can lead to rapidly increasing polarisation and segregation of communities. The crisis after the 2020 presidential election in the US has demonstrated the urgent need to re-establish some sort of democratic understanding. Although the election result was very clear once all the votes were counted, a large fraction of Republican voters continued to believe in baseless fraud claims swirling in their online communities. The Bright Line Watch survey in November found that, even after the result was officially confirmed, nearly half the Republican voters questioned still expected the result to be overturned. While voters don’t have to agree on who should be president, it is essential for the basic functionality of a democracy that they agree on what the result of an election is. Elsewhere, the efforts to limit the damage caused by the global COVID-19 pandemic are increasingly undermined by misinformation — sometimes spread by the same circles that also want Donald Trump to stay in the White House. With political stability in the US and public health around the world being endangered by the social spread of misinformation, what can be done to stop this? Based on her analysis combining rational and cultural reasons to believe the peer group more than the scientific consensus, De Cruz suggests three strategies that address one, the other or both.
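The segregating effect of social influence combined with following/unfollowing can be reproduced in a few dozen lines. The sketch below is a minimal bounded-confidence model with rewiring, written in the spirit of such simulations rather than as the authors’ actual code; the function name, network size and all parameter values are illustrative assumptions.

```python
import random
import statistics

def echo_chamber_sim(n=50, steps=4000, confidence=0.4, mu=0.3, seed=1):
    """Minimal opinion-dynamics model with social influence and
    following/unfollowing. Opinions lie on a scale from -1 to 1;
    each agent 'follows' five others in a random directed network."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n)]
    follows = [rng.sample([j for j in range(n) if j != i], 5) for i in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.choice(follows[i])
        if abs(opinions[i] - opinions[j]) < confidence:
            # Social influence: shift opinion towards an agreeing neighbour.
            opinions[i] += mu * (opinions[j] - opinions[i])
        else:
            # Unfollow the disagreeing neighbour and follow the
            # closest like-minded agent instead.
            follows[i].remove(j)
            candidates = [k for k in range(n) if k != i and k not in follows[i]]
            follows[i].append(min(candidates,
                                  key=lambda k: abs(opinions[k] - opinions[i])))
    return opinions, follows

opinions, follows = echo_chamber_sim()
gaps = [abs(opinions[i] - opinions[j])
        for i in range(len(opinions)) for j in follows[i]]
# After repeated rewiring, most remaining links connect like-minded
# agents, even though the initial network and opinions were random.
print(f"mean opinion gap across follow links: {statistics.mean(gaps):.2f}")
```

Running the model shows the network splitting into internally agreeing clusters: the influence step pulls connected agents together, while the rewiring step removes the cross-cluster links that could have exposed them to dissenting views.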
The first strategy is to improve the message: by explaining mechanistic workings rather than just offering naked facts, she argues, science communicators could win over those who care about factual correctness but would not believe them if it were just one claim set against a contradicting one. To pick an example from biology, many lay people struggle with the idea of speciation, having one species at one point and two species at a later time. Explaining the known genetic and biochemical mechanisms that render formerly compatible populations incompatible might help sceptics to understand that the science of evolution doesn’t require them to believe in miracles. To reach those denialists who are in it for the social sense of belonging to the group, De Cruz suggests improving the messenger instead. Thus, scientists with a religious affiliation could convince their fellow believers that acceptance of the scientific consensus may be compatible with the cultural identity provided by their religion. The third strategy concerns the communications landscape, which in recent years has served conspiracy theories all too well. Problems already identified include the undue amplification of maverick views both by traditional media eager to retain their customers and by online platforms whose algorithms boost content based on click rates regardless of truthfulness. Menczer’s institute at Indiana University has developed algorithmic tools to detect and curb the influence of bots and other inauthentic agents. While individuals can already use such tools — some of which are available as mobile phone apps — to protect themselves from misinformation, widespread and systematic use would be necessary to improve the communications landscape. In the run-up to the 2020 presidential election, the social media giants Facebook and Twitter started marking and removing posts containing misinformation, and this is a start.
More work to improve the communications landscape as well as the messages and messengers will be needed to stop our globalised culture from slipping deeper into the swamp of misinformation.
