The first formal definition of randomness, seen as a property of sequences of events or experimental outcomes, dates back to Richard von Mises’ work in the foundations of probability and statistics. The randomness notion introduced by von Mises is nowadays widely regarded as too weak. This is, to a large extent, due to the work of Jean Ville, which is often described as having dealt the death blow to von Mises’ approach, and which was integral to the development of algorithmic randomness—the now-standard theory of randomness for elements of a probability space. The main goal of this article is to trace the history of, and provide an in-depth appraisal of, two lesser-known yet historically and methodologically notable proposals for how to modify von Mises’ definition so as to avoid Ville’s objection. The first proposal is due to Abraham Wald, while the second is due to Claus-Peter Schnorr. We show that, once made precise in a natural way using computability theory, Wald’s proposal constitutes a much more radical departure from von Mises’ framework than intended. Schnorr’s proposal, on the other hand, does provide a partial vindication of von Mises’ approach: it demonstrates that it is possible to obtain a satisfactory randomness notion—indeed, a canonical algorithmic randomness notion—by characterizing randomness in terms of the invariance of limiting relative frequencies. More generally, we argue that Schnorr’s proposal, together with a number of little-known related results, reveals that there is more continuity than typically acknowledged between von Mises’ approach and algorithmic randomness. Even though von Mises’ exclusive focus on limiting relative frequencies did not survive the passage to the theory of algorithmic randomness, another crucial aspect of his conception of randomness did endure: namely, the idea that randomness amounts to a certain type of stability or invariance under an appropriate class of transformations.