Abstract
In the near future, the capabilities of commonly used artificial systems will reach a level at which we can permit them to make moral decisions autonomously as part of their proper daily functioning. Autonomous cars, personal assistants, household robots, stock-trading bots, and autonomous weapons are examples of systems that will face moral situations, from simple to complex, requiring some level of moral judgment. In the research field of machine ethics, we distinguish several types of artificial moral agents, each with a different level of moral agency. In this paper, we focus on the moral agency of Explicit and Full-blown artificial moral agents. We form an opinion regarding their level of moral agency and then examine whether it is morally right to align the values of (artificial) moral agents. If we assume or are able to determine that certain types of artificial agents are indeed moral agents, then we ought to examine whether it is morally right to construct them in such a way that they are "committed" to human values. We discuss an analogy to human moral agents and the implications of granting or denying moral agency to artificial agents.