Abstract

The singularity hypothesis is a hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on undersupported growth assumptions. I show how leading philosophical defenses of the singularity hypothesis fail to overcome the case for skepticism. I conclude by drawing out philosophical and policy implications of this discussion.