Abstract
Previous work on ethical machine reasoning has largely been theoretical, and where such systems have been implemented, they have generally been only initial proofs of principle. Here, we address the question of desirable attributes for such systems to improve their real-world utility, and how controllers with these attributes might be implemented. We propose that ethically critical machine reasoning should be proactive, transparent, and verifiable. We describe an architecture in which ethical reasoning is handled by a separate layer that augments a typical layered control architecture and ethically moderates the robot's actions. It makes use of a simulation-based internal model and supports proactive, transparent, and verifiable ethical reasoning. To do so, the reasoning component of the ethical layer uses our Python-based belief–desire–intention (BDI) implementation. The declarative logic structure of BDI facilitates both transparency, through logging of the reasoning cycle, and formal verification methods. To prove the principles of our approach, we use a case study implementation to experimentally demonstrate its operation. Importantly, it is the first such robot controller in which the ethical machine reasoning has been formally verified.
Highlights
Robots are increasingly autonomous: semiautonomous flying robots are commercially available, and driverless cars are undergoing real-world tests [1]
This trend is expected to continue [2]. Such systems have expanding abilities for making unsupervised decisions. This makes it imperative both that robotic systems are capable of taking human ethical values into account when they make decisions, and that mechanisms are in place to guarantee that the behaviour executed by the robot respects those values
First, we consider the motivation behind ethical reasoning in robots; second, we describe the simulation-based approach to robot anticipation, and how this can provide a robot with the capability to reason about ethical consequences; third, we detail the case for an ‘ethical black box’ (EBB) recorder; fourth, we describe the belief–desire–intention (BDI) paradigm, and why it is suitable for implementing transparent ethical reasoning; finally, we discuss formal verification, and how such a methodology can be applied to our BDI ethical reasoning
Summary
Robots are increasingly autonomous: semiautonomous flying robots are commercially available, and driverless cars are undergoing real-world tests [1]. Making ethical machine reasoning scrutable enables exposition of the reasoning behind actions taken, and facilitates trust in such systems. It is clear, through international efforts such as the developing IEEE P7001 standard on Transparency in Autonomous Systems, that this view is becoming mainstream. While transparency and verifiability are of benefit to most computational systems (including robot controllers in general), we argue that they are of particular importance in ethically critical systems, where the dual importance of both respecting human ethical values and being seen to do so is key to acceptance. Logging the BDI reasoning cycle facilitates transparency of the reasoning in the ethical layer. In this implementation we have used Asimov’s Laws of Robotics as our code of ethics, chosen not because they represent a viable code of machine ethics, but because they are a well-known and straightforward set of ethical rules that can be used to illustrate our approach. We verify that this system obeys Asimov’s three laws of robotics.
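To make the idea concrete, the following is a minimal illustrative sketch, not the authors' actual implementation, of how a BDI-style ethical layer might screen candidate actions against Asimov's laws while logging every step of the check for transparency. The action names and the predicted-consequence flags (`harms_human`, `disobeys_order`, `self_damage`) are hypothetical stand-ins for what a simulation-based internal model might report.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ethical_layer")

# Hypothetical candidate actions, each paired with the consequences a
# simulation-based internal model might predict for it.
ACTIONS = {
    "move_to_goal":  {"harms_human": False, "disobeys_order": False, "self_damage": False},
    "cross_hazard":  {"harms_human": True,  "disobeys_order": False, "self_damage": False},
    "wait_in_place": {"harms_human": False, "disobeys_order": True,  "self_damage": False},
}

def permitted(action, outcome):
    """Check Asimov's three laws in priority order, logging each decision."""
    log.info("evaluating %s -> %s", action, outcome)
    if outcome["harms_human"]:
        log.info("  blocked by First Law (a robot may not injure a human)")
        return False
    if outcome["disobeys_order"]:
        log.info("  blocked by Second Law (a robot must obey human orders)")
        return False
    if outcome["self_damage"]:
        log.info("  blocked by Third Law (a robot must protect its own existence)")
        return False
    log.info("  permitted")
    return True

def select_action(actions):
    """BDI-style deliberation: commit to the first ethically permitted action."""
    for action, outcome in actions.items():
        if permitted(action, outcome):
            return action
    return None  # no permissible action: remain idle

print(select_action(ACTIONS))
```

The logged trace of each law check is exactly the kind of record an ethical black box could retain, and the explicit rule-per-branch structure is what makes such declarative reasoning amenable to formal verification.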