Abstract
As technology advances and artificial agents (AAs) become increasingly autonomous, beginning to embody morally relevant values and to act on them, the question arises of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of "intentional" accounts of AMA. These accounts claim that moral agency should be accorded only to entities that have internal mental states. Against this thesis I argue that the requirement of internal mental states is philosophically unsound, as it runs up against the problem of other minds. In place of intentional accounts, I provide a functionalist alternative, which makes conceptual room for the existence of AMAs. The implication of this thesis is that at some point in the future we may be faced with moral situations in which no human being is responsible, but a machine may be. Moreover, this responsibility holds, I claim, independently of whether the agent in question is "punishable" or not.