Abstract

As we witness a growing number of increasingly autonomous software agents that interact with us, or operate on our behalf under circumstances not fully known in advance, we argue that there is a need to provide these agents with moral reasoning capabilities. In the current literature on behaviour constraints and multi-agent (software) systems (MAS), one can distinguish several topics. The first concerns the analysis of various forms of restraint and their basis; this topic is at the core of moral philosophy. The second concerns the formal specification of, and reasoning about, these constraints; research here focuses predominantly on the use of logic, mostly modal logic and defeasible logic. The last topic is the MAS- and implementation-related task of designing a working system in which rules can be enforced and deviant behaviour detected. We argue that all three topics need addressing and strong integration. The moral philosophical analysis is needed to provide a detailed conceptualization of the various forms of behaviour constraint and direction, an analysis that goes beyond what is usual in more technical, design-focused work. The (modal) logic provides the rigour ultimately required for implementation. The implementation itself is the ultimate objective. We outline the three components and demonstrate how they can be integrated. We do not intend, or claim, that this moral reasoning is on par with human moral reasoning. Our claim is that the analysis of human moral reasoning may provide a useful model for constraining software agent behaviour. Equally important, such reasoning is recognizable to humans, a key characteristic when it comes to 'human–artificial agent' interaction.
Recognizing and understanding the precise basis for the behaviour constraint in the artificial entity will make the agent more trustworthy, which, in turn, will facilitate acceptance of, and interaction with, artificial agents.
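To make the second topic concrete, the defeasible-logic style of constraint specification mentioned above can be illustrated with a minimal sketch. This is not the paper's formalism: the rule names, the context flags, and the priority-based conflict resolution are all illustrative assumptions, chosen only to show how a general prohibition can be defeated by a more specific obligation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """One defeasible norm governing a single action."""
    name: str
    condition: Callable[[Dict[str, bool]], bool]  # does the rule apply in this context?
    verdict: str       # "forbidden" | "obligatory" | "permitted"
    priority: int = 0  # a higher-priority rule defeats lower ones (defeasibility)

def evaluate(rules: List[Rule], context: Dict[str, bool]) -> str:
    """Return the verdict of the highest-priority rule applicable in `context`."""
    applicable = [r for r in rules if r.condition(context)]
    if not applicable:
        return "permitted"  # nothing constrains the action by default
    return max(applicable, key=lambda r: r.priority).verdict

# Hypothetical norms for an action such as sharing user data:
rules = [
    # General prohibition, lowest priority.
    Rule("default_privacy", lambda c: True, "forbidden", priority=0),
    # A more specific norm that defeats the default in an emergency.
    Rule("medical_emergency",
         lambda c: c.get("emergency", False), "obligatory", priority=10),
]

print(evaluate(rules, {}))                   # forbidden
print(evaluate(rules, {"emergency": True}))  # obligatory
```

The point of the sketch is the shape of the integration the abstract calls for: the philosophical analysis supplies the rules and their relative weight, the logic fixes what "defeats" means, and the implementation is what actually enforces the verdict at run time.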
