The discipline of historiography, understood as the scientific exploration of the past, developed much earlier than that of futurology, i.e. the methodologically rigorous examination of the future. Yet anticipating the future has arguably always had more practical importance than knowing and understanding the past. After all, anticipation is a crucial aspect of deliberation—the rational reflection on and organization of our action—and indeed of ethics. Even in Kantian ethics, with its seemingly utter disregard for the real-world consequences of our actions, anticipating hypothetical future scenarios appears to be an important element of the rational exercise of figuring out whether maxims are universalizable. Even so, the gap between the huge aggregate of rigorous studies of the past and the cautious beginnings of critical analyses and thorough assessments of future scenarios is striking. No surprise, then, that we know a lot about the past and only very little about the future.

Take technology as an example, arguably one of the contemporary phenomena most significantly reshaping the human condition. There is a colossal amount of knowledge about the history, and indeed the prehistory, of technology. Studying the historical and archeological data and looking at the way technology has developed since it began with the use of simple stone tools more than two million years ago, it is hard to resist the impression that the pace of technological innovation has been accelerating over time. Indeed, in the Paleolithic, technological progress must have been a difficult concept to come up with, since nobody would ever have witnessed any significant technological change during his or her lifetime. Today, in contrast, many people lament the speed of technological advance. It affects everybody, so much so that it sometimes seems difficult to keep up with the ever-new changes. Understandably, speculation about various new technology-induced vistas of the future is burgeoning.

In the early 1990s Vernor Vinge published his essay on the technological singularity (Vinge 1993). Reflecting on the future of technology, he predicted the rise of superhumanly intelligent entities within the next 30 years, explored different scenarios of how this might occur (AI, enhanced human brains, etc.), and looked into the potential effects of this development. He argued that the singularity would likely mean a huge acceleration of further technological progress, because innovation would henceforth be driven by powers more intelligent than anything currently known to humans, making many of our present models of reality obsolete (Vinge 1993). Ray Kurzweil later maintained that the singularity would involve a merger of human beings and technology and, more generally, such fierce technological change that it would represent a "rupture in the fabric of human history" (Kurzweil 2005, 9). He even set a date for when the singularity would actually occur: the year 2045. Unsurprisingly, Vinge's and Kurzweil's ideas have drawn a range of critical reactions pointing out various purported flaws and weaknesses in their respective singularity prognostications.

More recently the singularity debate has gained momentum with two contributions. First, in 2012 the Journal of Consciousness Studies published a special double issue devoted to the singularity (Volume 19, Issues 1–2, 2012), containing various responses to David Chalmers' philosophical analysis of the idea (Chalmers 2010).