Abstract

We might create artificial systems that can suffer. Since the scale of AI suffering could be astronomical, the moral stakes are huge. We therefore need an approach that tells us what to do about the risk of AI suffering. I argue that such an approach should ideally satisfy four desiderata: beneficence, action-guidance, feasibility, and consistency with our epistemic situation. Scientific approaches to AI suffering risk hold that we can improve our scientific understanding of AI, and of AI suffering in particular, in order to decrease AI suffering risks. However, such approaches tend to conflict either with the desideratum of consistency with our epistemic situation or with feasibility. Thus, we also need an explicitly ethical approach to AI suffering risk, one that tells us what to do in the light of profound scientific uncertainty about AI suffering. After discussing multiple views, I express support for a hybrid approach, based partly on the maximization of expected value and partly on a deliberative approach to decision-making.
