Abstract

With the widespread use of artificial intelligence, understanding the behavior of intelligent agents and robots such as drones is crucial to guarantee successful human-agent interaction, since it is not straightforward for humans to understand an agent's state of mind. Recent empirical studies have confirmed that explaining a system's behavior to human users fosters the latter's acceptance of the system and therefore brings out the importance of explainability. However, providing overwhelming or sometimes unnecessary information can also confuse users and cause failure. For these reasons, this paper proposes a decentralized method to aggregate explanations sent by remote agents to human users according to the users' wishes and needs. To this end, the paper relies on a holonic multi-agent system to hierarchically decompose the environment and enable the aggregation of explanations. The proposal is tested in a small scenario and outlines explanations at different levels of detail, from microscopic to macroscopic.
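The abstract's core idea, aggregating agent explanations through a holonic hierarchy so users can choose a level of detail, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Holon` class, the `aggregate` method, and the drone messages are all hypothetical, assuming only that each leaf agent carries its own explanation and that intermediate holons summarize their sub-holons.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Holon:
    """A node in the holarchy: either an agent (leaf) or a group of sub-holons."""
    name: str
    explanation: str = ""                            # leaf agents carry their own explanation
    children: List["Holon"] = field(default_factory=list)

    def aggregate(self, level: int) -> List[str]:
        """Return explanations at the requested level of detail.

        level 0 is macroscopic: one summary line per holon at this depth;
        larger levels descend further, down to microscopic leaf messages.
        """
        if level == 0 or not self.children:
            if self.children:
                # summarize sub-holons instead of forwarding every message
                return [f"{self.name}: {len(self.children)} sub-holons active"]
            return [f"{self.name}: {self.explanation}"]
        out: List[str] = []
        for child in self.children:
            out.extend(child.aggregate(level - 1))
        return out

# hypothetical holarchy: two drones grouped under one fleet holon
fleet = Holon("fleet", children=[
    Holon("drone-1", "rerouting to avoid wind gust"),
    Holon("drone-2", "battery low, returning to base"),
])

macro = fleet.aggregate(0)   # coarse view for the user
micro = fleet.aggregate(1)   # per-agent detail
```

Requesting `level=0` yields a single fleet-level summary, while `level=1` returns each drone's individual explanation, matching the abstract's microscopic-to-macroscopic range of detail.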
