Abstract
Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? In an experiment, participants read scenarios in which either human or AI agents engage in virtuous or vicious behavior and then judge the agents’ level of virtue or vice. The scenarios represent different virtue ethics domains: truth, justice, fear, wealth, and honor. Quantitative and qualitative analyses show that moral attributions are weakened for AIs compared to humans, and that the reasoning and explanations behind those attributions are varied and more complex. On “relational” views of membership in the moral community, virtuous machines would indeed be included, even if the attributions made to them are weakened. Hence, while our moral relationships with artificial agents may be of the same types, they may yet remain substantively different from our relationships with human beings.