Abstract

The increasing prevalence of autonomously operating artificial agents has created the desire, and arguably the need, to equip such agents with moral capabilities. One potential tool for morally sanctioning an artificial agent as admissible for its tasks is to apply a so-called moral Turing test (MTT) to the machine. The MTT can be supported by a pragmatist metaethics as an iteratively applied and modified procedure. However, this iterative, experimentalist procedure faces a dilemma due to the problem of technological entrenchment. I argue that, at least in certain important domains of application, the justification of artificial moral agents requires their deployment, which may entrench them and thereby undermine the justificatory process by hindering its further iteration.
