Abstract
Part of the literature on machine ethics and ethical artificial intelligence focuses on the idea of defining autonomous ethical agents able to make ethical choices and solve dilemmas. While ethical dilemmas often arise in situations characterized by uncertainty, the standard approach in artificial intelligence is to use rational choice theory and maximization of expected utility to model how algorithms should choose given uncertain outcomes. Motivated by the moral proxy problem, which holds that the appraisal of ethical decisions varies depending on whether algorithms are considered to act as proxies for higher- or for lower-level agents, this paper introduces the moral prior problem, a limitation that, we believe, has been genuinely overlooked in the literature. In a nutshell, the moral prior problem amounts to the idea that, beyond the thesis of the value-ladenness of technologies and algorithms, automated ethical decisions are predetermined by moral priors during both conception and usage. As a result, automated decision procedures are insufficient to produce ethical choices or solve dilemmas, implying that we need to carefully evaluate what autonomous ethical agents are and can do, and what they are not and cannot.
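For readers unfamiliar with the framing the abstract invokes, a minimal sketch of the standard expected-utility criterion from rational choice theory (the notation below is illustrative, not taken from the paper):

\[
a^{*} = \arg\max_{a \in A} \sum_{s \in S} P(s)\, U\big(o(a, s)\big)
\]

where \(A\) is the set of available actions, \(S\) the set of possible states of the world, \(P\) a probability distribution over states, and \(U\) a utility function over the outcomes \(o(a, s)\). The moral prior problem targets the inputs to this procedure: the choice of \(U\) (and, arguably, of \(P\) and the framing of \(A\)) encodes moral commitments fixed before any "autonomous" choice is made.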