This paper uses computational methods to investigate the epistemological effects of misinformation on communities of rational agents, and in doing so contributes to the philosophical debate on ‘higher-order’ evidence (i.e. evidence that bears on the quality and/or import of one’s evidence). Modelling communities as networks of individuals, each with a degree of belief in a given target proposition, it simulates the introduction of unreliable mis- and disinformants and records the epistemological consequences for these communities. First, using small, artificial networks, it compares the outcomes when agents who are aware of the prevalence of mis- or disinformation in their communities either deny the import of this higher-order evidence or attempt to accommodate it by distrusting the information in their environment. Second, deploying simulations on a larger, real-world network, it observes the impact on trusting agents of increasing levels of misinformation, as well as of more minimal but structurally targeted unreliability. Comparing the two information-processing strategies in the artificial context, it finds a (familiar) trade-off between accuracy (arriving at a correct consensus) and efficiency (doing so in a timely manner). In the more realistic setting, community confidence in the truth is depressed by even minimal levels of misinformation.
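The abstract does not specify the paper's update rule, network topology, or trust mechanism, so the following minimal Python sketch is an illustration only, built on assumptions: a ring network, a true target proposition, binary testimony that is accurate with probability `reliability` (inverted by misinformants), and Bayesian updating in which ‘distrusting’ agents accommodate higher-order evidence by assuming a lower effective reliability (`assumed_reliability`). All names and parameter values here are hypothetical, not the authors' model.

```python
# Illustrative sketch only: the update rule, signal reliabilities, and the
# "distrust" discount below are assumptions chosen to mirror the abstract's
# description, not the paper's published model.
import random

def bayes_update(credence, signal, reliability):
    """Update P(H) on a binary signal assumed accurate with prob. `reliability`."""
    if signal:
        likelihood_h, likelihood_not_h = reliability, 1 - reliability
    else:
        likelihood_h, likelihood_not_h = 1 - reliability, reliability
    numerator = likelihood_h * credence
    return numerator / (numerator + likelihood_not_h * (1 - credence))

def simulate(n_agents=20, n_misinformants=4, steps=200,
             reliability=0.7, assumed_reliability=0.7, seed=0):
    """Ring network; misinformants invert their signals about a true proposition H.
    Setting assumed_reliability < reliability models agents who accommodate
    higher-order evidence by distrusting their informational environment."""
    rng = random.Random(seed)
    credences = [0.5] * n_agents  # uniform prior in the true proposition H
    liars = set(rng.sample(range(n_agents), n_misinformants))
    for _ in range(steps):
        for i in range(n_agents):
            j = rng.choice([(i - 1) % n_agents, (i + 1) % n_agents])  # a neighbour
            truthful = rng.random() < reliability
            signal = (not truthful) if j in liars else truthful  # H is true
            credences[i] = bayes_update(credences[i], signal, assumed_reliability)
    return sum(credences) / n_agents  # mean community confidence in the truth

# Trusting agents vs. agents who discount all incoming testimony:
print("trusting   :", round(simulate(assumed_reliability=0.7), 3))
print("distrusting:", round(simulate(assumed_reliability=0.6), 3))
```

Lowering `assumed_reliability` makes every update more conservative, so consensus on the truth arrives more slowly but is less easily derailed by misinformants, which is one way to picture the accuracy/efficiency trade-off the abstract reports.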