Abstract

This article describes key challenges in creating an ethics "for" robots. Robot ethics is not only a matter of the effects caused by robotic systems or the uses to which they may be put, but also of the ethical rules and principles that these systems ought to follow: what we call "Ethics for Robots." We suggest that the Principle of Nonmaleficence, or "do no harm," is one of the basic elements of an ethics for robots, especially robots that will be used in a healthcare setting. We argue, however, that implementing even this basic principle will raise significant challenges for robot designers. In addition to technical challenges, such as ensuring that robots can detect salient harms and dangers in the environment, designers will need to determine an appropriate sphere of responsibility for robots and to specify which of various types of harms must be avoided or prevented. These challenges are amplified by the fact that the robots we are currently able to design possess a form of semi-autonomy that differs from that of other, more familiar semi-autonomous agents such as animals or young children. In short, robot designers must identify and overcome the key challenges of an ethics for robots before they can ethically deploy robots in practice.
