Abstract

As AIs acquire greater capacities, the question of whether they would also acquire greater moral status becomes salient. This chapter sketches a theory of moral status and considers what kind of moral status an AI could have. Among other things, the chapter argues that AIs that are alive, conscious, or sentient, or that can feel pain, have desires, or exercise rational or moral agency, should have the same kind of moral status as other entities with the same intrinsic properties. It also proposes a sufficient condition for an AI to have human-level moral status and be a rightsholder: that the AI has the physical basis for moral agency. The chapter further considers what kind of rights a rightsholding AI could have and how AIs could come to have greater than human-level moral status.
