Abstract

Autonomous systems are machines that can alter their behavior without direct human oversight or control. How ought we to program them to behave? A plausible starting point is given by the Reduction to Acts Thesis, according to which we ought to program autonomous systems to do whatever a human agent ought to do in the same circumstances. Although the Reduction to Acts Thesis is initially appealing, we argue that it is false: it is sometimes permissible to program a machine to do something that it would be wrong for a human to do. We advance two main arguments for this claim. First, the way an autonomous system will behave can be known in advance, and this knowledge can indirectly affect the behavior of other agents; the same may not be true at the time the system actually executes its programming. Second, ignorance of the identities of the victims and beneficiaries can provide a justification at the programming stage that would be unavailable to an agent at the time the autonomous system executes its programming.
