Algorithms under the Reign of Probability

Theodora J. Dryer

In a 1930 issue of Scientific Monthly, US mathematician Warren Weaver proclaimed the twentieth century the Reign of Probability.1 He argued that "it is nearly impossible to escape from this mathematical goddess of chance,"1 as probability tools were then being designed to quantify uncertainty across all areas of statistical research, even in the slippery calculations of quantum particles. Weaver's proclamation reflected a growing embrace of new "probability tools" in statistical research and governance throughout interwar Europe, India, and the US. But it was also a future-looking declaration of an ascendant epistemological power over modern society. It was a promise that wide-ranging forms of knowledge would be subject to probability, with its dominion stretching from the seemingly banal instruments of life insurance to the highest theoretical sciences. Probability would command all domains of data-driven inquiry.

The interwar turn toward uncertainty management reverberated throughout science, industry, and government. It was a movement that produced powerful and long-lasting social infrastructures oriented toward a world dictated by uncertainty. And we are currently experiencing a revival of this movement in its big data iteration: algorithmic uncertainty.

MAPPING UNCERTAINTY

To understand the shifting powers of uncertainty management, we must map out its epistemological, technological, and political meanings. As a base definition, probability describes the likelihood of propositions and events, expressed as a value between 0 and 1 (or the equivalent percentage), where 1 denotes perfect certainty and anything less than 1 expresses uncertainty (sketched in notation below). Beyond this, probabilistic knowing is a commitment to the larger analytic (laws, axioms, and definitions) and technological (computers and data systems) architectures needed to express limited information in terms of likelihoods. Under conditions of limited information, elaborate mathematical and social infrastructures are needed to sustain the probabilistic worldview.

My dissertation examines the significance of a dramatic inflection point in 1920s- and 1930s-era uncertainty management, when new probability tools and infrastructures were designed to manage uncertainty in statistical research and governance. Studies of data uncertainty then persisted, with drastically different applications, throughout the Cold War proxy wars. After 1980, computer scientists redesigned uncertainty models to work for big data, and since 2008 this research has ballooned, with over 1.3 million publications on algorithm-driven uncertainty management. Algorithmic uncertainty constitutes a very different set of technologies and social politics from those of the interwar years, but there is nonetheless a clear lineage. Significantly, algorithmic uncertainty processes involve layered, computer-implemented runs of probability tests and statistical estimations on computer-implemented sampling bodies.

A growing commitment to uncertainty management is resurfacing today as a newly reconfigured language of both truth and objectivity. It has overwritten earlier social doctrines, such as the mechanistic worldview, and operates to the exclusion of alternative possibilities.
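A minimal notational sketch of the base definition given above, with symbols chosen here for illustration rather than drawn from Weaver or from the dissertation: writing P(E) for the probability that an event E occurs,

\[ 0 \le P(E) \le 1, \qquad P(E) = 1 \ \text{(perfect certainty)}, \qquad P(E) < 1 \ \text{(some degree of uncertainty)}, \]

and the same likelihood expressed as a percentage is $100 \cdot P(E)$.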
Today we test the validity of science through computer-simulated assessments of the statistical significance of its findings.2 The Intergovernmental Panel on Climate Change (IPCC) assesses the past, present, and future of climate knowledge in terms of uncertainty logics.3 And uncertainty-modeling practices are gaining increasing power in finance innovation cultures.4 By merging uncertainty practices with midcentury capital risk logics, "market makers" now promise to offload market risk accountability entirely to algorithms.5

PREDIGITAL UNCERTAINTY

The 1920s and 1930s were a massive inflection point in uncertainty work, one that established a modernizing conception of data technologies and data-driven governance. What is significant here, and what should be asked about algorithmic uncertainty, is how uncertainty epistemology came to power and who benefits in a world purportedly dictated by probability. The popular adoption of interwar uncertainty tools was a response to an apparent loss of technocratic control over nineteenth-century modes of statistical governance. The utility of statistical data was called into question as practitioners described statistical information as at once overabundant and incomplete. Statistics was by then not a science of counting but of estimation, and error was an inherent component of this work. Anxious about a loss of public confidence in data-driven institutions, technocrats sought to command error in statistical estimation. In Weaver's description, all knowledge was undergirded by an "insufficiently accurate and insufficiently extensive body of data" and needed probability tools to render that data into "probability data."1 New probability tools were...