Abstract

This chapter explores the possibility of moral artificial intelligence – what it might look like and what it might achieve. Against the backdrop of the enduring limitations of human moral psychology and the pressing challenges inherent in a globalised world, we argue that an AI that could monitor, prompt and advise on moral behaviour could help human agents overcome some of their inherent limitations. Such an AI could monitor physical and environmental factors that affect moral decision-making, could identify and make agents aware of their biases, and could advise agents on the right course of action, based on the agents' own moral values. A common objection to the concept of moral enhancement is that, since a single account of right action cannot be agreed upon, the project of moral enhancement is doomed to failure. We argue that insofar as this is a problem, it is a problem for some biomedical interventions, but an agent-tailored moral AI would not only preserve pluralism of moral values but would also enhance agents' autonomy by helping them to overcome their natural psychological limitations. In this respect, moral AI has an advantage over other forms of biomedical moral enhancement.
