Abstract

Longtermists argue we should devote much of our resources to raising the probability of a long happy future for sentient beings. But most interventions that raise that probability also raise the probability of a long miserable future, even if they raise the latter by a smaller amount. If we choose by maximising expected utility, this isn’t a problem; but, if we use a risk-averse decision rule, it is. I show that, with the same probabilities and utilities, a risk-averse decision theory tells us to hasten human extinction, not delay it. What’s more, I argue that morality requires us to use a risk-averse decision theory. I present this not as an argument for hastening extinction, but as a challenge to longtermism.
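To make the reversal concrete, here is a minimal numerical sketch. It assumes one familiar risk-averse rule, Buchak-style risk-weighted expected utility (REU) with the convex risk function r(p) = p²; the three options and all probabilities and utilities below are invented for illustration and are not taken from the paper.

```python
# Toy illustration of the abstract's argument, using Buchak-style
# risk-weighted expected utility (REU). All probabilities and utilities
# are hypothetical; they are not drawn from the paper itself.

def expected_utility(gamble):
    """Standard expected utility: sum of p * u over (p, u) outcomes."""
    return sum(p * u for p, u in gamble)

def risk_weighted_eu(gamble, r=lambda p: p ** 2):
    """Risk-weighted expected utility with risk function r.

    With outcomes ordered worst-first (u1 <= ... <= un):
        REU = u1 + sum_{i>=2} r(P(getting at least u_i)) * (u_i - u_{i-1}).
    A convex r (here r(p) = p^2) encodes risk aversion: improvements over
    the worst case are discounted unless they are highly probable.
    """
    outcomes = sorted(gamble, key=lambda pu: pu[1])
    reu = outcomes[0][1]
    for i in range(1, len(outcomes)):
        p_at_least = sum(p for p, _ in outcomes[i:])
        reu += r(p_at_least) * (outcomes[i][1] - outcomes[i - 1][1])
    return reu

# Hypothetical options, each a list of (probability, utility) pairs.
# Utility +100: long happy future; -100: long miserable future; 0: neither.
hasten = [(1.0, 0)]                                    # certain neutral outcome
status_quo = [(0.001, -100), (0.989, 0), (0.010, 100)]
intervene = [(0.003, -100), (0.977, 0), (0.020, 100)]  # raises BOTH tail probabilities

for name, g in [("hasten", hasten), ("status quo", status_quo), ("intervene", intervene)]:
    print(f"{name:10s}  EU = {expected_utility(g):7.3f}   REU = {risk_weighted_eu(g):7.3f}")
```

With these numbers, expected utility ranks the longtermist intervention first (EU = 1.700) and hastened extinction last (EU = 0.000), while REU exactly reverses the ordering (0.000 > -0.190 > -0.559), because the small added chance of a long miserable future weighs more heavily under the convex risk function. That reversal is the structure of the abstract's challenge.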
