Abstract
Automated agents, with rapidly increasing capabilities and ease of deployment, will assume increasingly key and decisive roles in our societies. We will encounter and work together with such agents in diverse domains, even in peer roles. To be trusted and to coordinate seamlessly, these agents will be expected, and required, to explain their decision making, behaviors, and recommendations. We are interested in developing mechanisms that human-agent teams can use to maximally leverage the relative strengths of human and automated reasoners. We focus on ad hoc teams, in which team members begin to collaborate, often in response to emergencies or short-term opportunities, without significant prior knowledge of each other. In this study, we use virtual ad hoc teams, each consisting of a human and an agent, collaborating over a few episodes, where each episode requires them to complete a set of tasks chosen from available task types. Team members are initially unaware of their partners' capabilities for the available task types, and the agent task allocator must adapt the allocation process to maximize team performance. In collaborative teams of humans and agents, it is important to establish user confidence and satisfaction as well as effective team performance. Explanations can increase user trust in agent team members and in team decisions. This paper focuses on analyzing how explanations of task allocation decisions influence both user performance and the human workers' perspective, including factors such as motivation and satisfaction. We evaluate different types of explanations, such as positive, strength-based explanations and negative, weakness-based explanations, to understand (a) how satisfaction and performance improve when explanations are presented, and (b) how factors such as confidence, understandability, motivation, and explanatory power correlate with satisfaction and performance.
We run experiments with MTurk workers on the CHATboard platform, which supports virtual collaboration over multiple episodes of task assignments. We present our analysis of the results and our conclusions related to our research hypotheses.
International Journal on Artificial Intelligence Tools