Abstract

We present an approach to generating natural language justifications of decisions derived from norm-based reasoning. Assuming an agent that maximally satisfies a set of rules specified in an object-oriented temporal logic, the user can ask factual questions (about the agent’s rules, actions, and the extent to which the agent violated the rules) as well as “why” questions that require the agent to compare its actual behavior to counterfactual trajectories with respect to these rules. To produce natural-sounding explanations, we focus on the subproblem of producing natural language clauses from statements in a fragment of temporal logic, and then describe how to embed these clauses into explanatory sentences. We use a human judgment evaluation on a testbed task to compare our approach against variants in terms of intelligibility, mental model, and perceived trust.

Highlights

  • Recent research has enabled artificial agents to work closely with humans, sometimes as team-mates, sometimes as independent decision-makers

  • The main difference between the violation enumeration language (VEL) and linear temporal logic (LTL) is that VEL replaces LTL’s atomic propositions with atomic predicates of the sort found in the relational Markov decision process (RMDP) environment

  • Comparing real and counterfactual violations: to elaborate on the counterfactual explanation for “why φ”, the system outputs a set of rules violated by the counterfactual trajectory whose total violation cost is sufficient to exceed that of the rules violated by the actual trajectory; each such rule is negated, converted into its corresponding VEL clause, and then rendered in gerund form (see the sketch following this list)
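
The minimal Python sketch below illustrates the two highlights above: rules as VEL formulas whose atoms are RMDP predicates (rather than LTL propositions), and the comparison of real versus counterfactual violations. It is not the paper's implementation; the `Rule` dataclass, the `violation_cost` helper, the greedy selection of counterfactual violations, and the example rules are all illustrative assumptions, and the gerund clause stands in for the output of the paper's VEL-to-natural-language pipeline.

```python
# Hedged sketch of the "compare real vs. counterfactual violations" step.
# All names here are hypothetical placeholders, not the paper's code.
from dataclasses import dataclass
from typing import List

@dataclass
class Rule:
    vel_formula: str    # VEL rule over RMDP predicates, e.g. "G(!enter(agent, zone_3))"
    gerund_clause: str  # negated rule rendered in gerund form, e.g. "entering zone 3"
    cost: float         # violation cost charged when the rule is broken

def violation_cost(violated: List[Rule]) -> float:
    """Total violation cost incurred by the rules violated along a trajectory."""
    return sum(r.cost for r in violated)

def contrastive_explanation(actual: List[Rule], counterfactual: List[Rule]) -> str:
    """Answer 'why phi?' by citing counterfactual violations whose combined
    cost exceeds the cost of the rules the actual trajectory violated."""
    budget = violation_cost(actual)
    chosen, total = [], 0.0
    # Pick costly counterfactual violations until their cost exceeds the budget.
    for rule in sorted(counterfactual, key=lambda r: r.cost, reverse=True):
        chosen.append(rule)
        total += rule.cost
        if total > budget:
            break
    reasons = ", ".join(r.gerund_clause for r in chosen)
    return f"because the alternative would have meant {reasons}"

# Example with hypothetical rules:
actual = [Rule("G(!delayed(delivery))", "delaying the delivery", 1.0)]
counterfactual = [Rule("G(!enter(agent, restricted_zone))",
                       "entering the restricted zone", 3.0)]
print(contrastive_explanation(actual, counterfactual))
# -> because the alternative would have meant entering the restricted zone
```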

Summary

Introduction

Recent research has enabled artificial agents (such as robots) to work closely with humans, sometimes as team-mates, sometimes as independent decision-makers. Our approach differs in that, rather than aiming to justify logical conclusions via proofs or to specify natural language translations of arbitrary logical forms, it justifies the decisions of autonomous agents (governed by principles specified in logic) in a way that is understandable to human users. Most similar in this vein is the work of Kutlak and van Deemter (2015), who provide natural language descriptions of the pre- and post-conditions of planner actions. We construct explanations for the behavior of an agent governed by temporal logic rules acting in a deterministic relational Markov decision process (RMDP), covering questions about the agent’s rules and actions as well as “why” queries that require a contrastive explanation (Elzein, 2019) appealing to the temporal logic rules (content generation; section 3). We conclude with a summary and discussion of our contributions (section 6).

  • Test scenario
  • Agent environment
  • Calculating and minimizing violation cost
  • Content Generation
  • Surface Representation Generation
  • Translating VEL to natural language
  • Key assumptions
  • VEL clause construction pipeline
  • Processing a conjunction of predicates
  • Embedding VEL clauses into response templates
  • Evaluation
  • Conditions
  • Measures
  • Results
  • Discussion and Conclusion