Abstract

Purpose of Review

There is much debate in machine ethics about the most appropriate way to introduce ethical reasoning capabilities into robots and other intelligent autonomous machines (IAMs). The main problem is that hardwiring intelligent and cognitive robots with commands not to cause harm or damage is not consistent with the notions of autonomy and intelligence. Moreover, such hardwiring leaves robots with no course of action when they encounter situations for which they are not programmed, or in which some harm is caused no matter what course of action is taken.

Recent Findings

Recent developments in intelligent autonomous vehicle standards have led to the identification of different levels of autonomy that can be usefully applied to different levels of cognitive robotics. In particular, the introduction of an ethical reasoning capability can add levels of autonomy not previously envisaged but which may be necessary if fully autonomous robots are to be trustworthy. However, research into how to give IAMs an ethical reasoning capability remains a relatively under-explored area in artificial intelligence and robotics. This review covers previous research approaches involving case-based reasoning, artificial neural networks, constraint satisfaction, category theory, abductive logic, inductive logic, and fuzzy logic.

Summary

This paper reviews what is currently known about machine ethics and the ways in which cognitive robots, as well as IAMs in general, can be provided with an ethical reasoning capability. A new type of metric-based ethics appropriate for robots and IAMs may be required to replace our current concept of ethical reasoning, which is largely qualitative in nature.
