Abstract

In human-agent collectives, humans and agents must work collaboratively and agree on collective decisions. However, ensuring that agents make decisions responsibly is a complex task, especially when they encounter dilemmas in which no available choice is unambiguously preferable to the others. Methodologies that allow such systems to be certified are therefore urgently needed. In this paper, we propose a novel engineering methodology based on formal model checking as a step toward providing evidence for the certification of responsible and explainable decision making within human-agent collectives. Our approach, built on the MCMAS model checker, verifies decision-making behavior against logical formulae specified to guarantee safety and controllability and to address ethical concerns. We propose the use of counterexample traces and simulation results to provide the AI engineer with a judgment and an explanation of why actions are refused or allowed. To demonstrate the practical feasibility of our approach, we evaluate it on the real-world problem of human-UAV (unmanned aerial vehicle) teaming in dynamic and uncertain environments.
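For illustration only (the abstract does not reproduce the paper's concrete specifications, so the propositions and the agent name below are hypothetical), properties of the kind described are typically expressed in the CTL/ATL-style logics supported by MCMAS. A safety property for the UAV setting might read

    $AG\,(\mathit{no\_fly\_zone\_breached} \rightarrow AX\,\mathit{abort\_maneuver})$

("whenever a no-fly-zone breach is detected, the maneuver is aborted at the next step"), while a controllability property might read

    $\langle\langle \mathit{Operator} \rangle\rangle\, F\, \mathit{mission\_aborted}$

("the human operator has a strategy to eventually abort the mission"). A counterexample trace returned when such a formula fails is what the methodology would use to explain to the AI engineer why an action was refused.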
