Abstract

By existential risk the Oxford philosopher Toby Ord means the “permanent destruction of human potential.” Actual human extinction is existential, but so would be the irreversible collapse of civilization. In the latter category, for example, catastrophic climate change through a runaway greenhouse effect could yield such a future, with the human population reduced to a remnant left clinging to life; a worldwide totalitarian regime, self-perpetuating through “technologically enabled indoctrination, surveillance, and enforcement,” would also count.

In this book, Ord lays out the range of existential threats, both familiar and novel, and offers a well-documented and (where documents fail) well-reasoned assessment of their various likelihoods. His bottom line: “Given everything I know, I put the existential risk this century at around one in six: Russian roulette” (p. 30). Alarming enough; and if continued, it is, as he says, an unsustainable level of risk, “unlikely to last more than a small number of centuries” (simple compounding, sketched below, bears this out). For humans to survive over the longer term, the risk will have to be greatly lowered. The period we are living in now, with humanity at high risk of destroying itself, Ord calls the Precipice.

Others have trodden this ground. John Leslie's The End of the World: The Science and Ethics of Human Extinction (1996), reviewed in PDR 23, no. 4, was an early entrant in the genre. Leslie's treatment, more casual than Ord's, arrived at a roughly 70 percent overall chance of avoiding extinction over the next five centuries, somewhat better odds than Ord's figures imply but in the same ballpark.

In Ord's enumeration, anthropogenic risks are the main threat to survival, vastly exceeding natural risks. The largest natural existential risk is eruption of a supervolcano like the ones that created the Yellowstone caldera in Wyoming and Lake Toba in Sumatra, put at a one in 10,000 chance over the next century. (Existential risk from an asteroid collision is far smaller.) In contrast, anthropogenic risks are one or even two orders of magnitude greater. An existential catastrophe this century through nuclear war or through climate change is in each case assessed at one in 1,000; through a human-engineered pandemic, at one in 30. Without the condition of irreversibility, of course, these risks would be much greater.

Most threatening of all in Ord's account, though also the most speculative, is the possible malign consequence of the development of artificial general intelligence (AGI) to a degree that exceeds human levels, a prospect the “expert community” on average evidently considers achievable, more likely than not by the end of the century. An AGI system “optimized toward inhuman values” could arrogate an ever-increasing share of power, with humanity, in effect, ceding its control. We may then face “a deeply flawed or dystopian future locked in forever.” The judged risk for the century: one in ten.

The risk assessment exercise points to where remedial efforts need to be directed, and to their urgency. The agenda is straightforward, calling for improvements in international coordination on security, devising institutions that take greater account of the interests of future generations, and strengthening the governance of potentially dangerous new technologies. Such efforts are grossly underresourced: “we can state with confidence that humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us” (p. 58). That might seem to wrap up the author's task.
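Before moving on, the “small number of centuries” remark is worth the quick arithmetical check promised above. As a back-of-envelope illustration (the arithmetic is ours, not a calculation quoted from the book), a sustained one-in-six risk per century implies a survival probability of

\[
\Pr(\text{surviving } n \text{ centuries}) = \left(\tfrac{5}{6}\right)^{n},
\qquad
\left(\tfrac{5}{6}\right)^{n} < \tfrac{1}{2}
\quad \text{once} \quad
n > \frac{\ln 2}{\ln(6/5)} \approx 3.8 .
\]

At that rate, the odds of humanity making it past four centuries are already worse than even, and past ten centuries only about 16 percent.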
But Ord's vision, spelled out in a final chapter, is more expansive. With existential security attained and humanity's potential secured, “we will be past the Precipice, free to contemplate the range of futures that lie open before us… the grand accomplishments our descendants might achieve with eons and galaxies as their canvas” (pp. 190–191). The time horizon is unlimited: mammal species in the fossil record typically last a million years, and humanity, some 200,000 years old, would on that reckoning still be near its beginning. For us, therefore, “almost all humans who will ever live have yet to be born” (p. 43). (Interestingly, this directly contradicts an argument of Leslie, who applied a version of the so-called anthropic principle, that we today should be seen as temporally average rather than exceptionally early observers among all humans past and future, to conclude that an ultra-long human future is highly improbable.) Ord's future has no place for mundane demography, which might worry about sustainable net reproduction rates, or for regional differentiation, which might bring in geopolitics. Indeed, a radical impartialism prevails: all lives matter, and not just spatially, as in Peter Singer's One World: The Ethics of Globalization (2002) or in Ord's own innovative project on “effective altruism,” but also over time: “people matter equally regardless of their temporal location” (p. 44). Ethical purism accords massive weight to future generations. The book ends with seven meaty appendices, on topics such as the purported inadmissibility of time discounting, past nuclear weapons accidents, and the value of protecting humanity (with existential risk formalized as a hazard rate, r).
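On that last point, the machinery is that of standard survival analysis. A minimal sketch (the notation and the constant-rate simplification are ours, not necessarily Ord's): with hazard rate r, the probability of surviving to time t and the expected survival time T are

\[
S(t) = \exp\!\left(-\int_0^t r(u)\,du\right)
\;\xrightarrow{\; r \text{ constant} \;}\;
S(t) = e^{-rt},
\qquad
\mathbb{E}[T] = \int_0^\infty S(t)\,dt = \frac{1}{r}.
\]

On this simplification, Ord's one-in-six century corresponds to r = ln(6/5) ≈ 0.18 per century and an expected remaining future of only about five and a half centuries; the value of protecting humanity then lies in the fact that permanently lowering r lengthens the expected future, 1/r, in exact proportion.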
