Abstract

The purpose of this paper is to discuss human-centered design implications for shared decision making between humans and autonomous systems in complex environments. Design implications are generated from empirical results obtained under two research paradigms. In the first paradigm, an intelligent agent (RoboLeader) supervised multiple subordinate systems and was in turn supervised by a human operator. The RoboLeader research varied the number of subordinate units, task difficulty, agent reliability, type of agent errors, and degree of partial autonomy. The second paradigm involved human interaction with partially and fully autonomous systems. Design implications from both paradigms are evaluated, relating to multitasking, adaptive systems, false alarms, individual differences, operator trust, and the allocation of tasks between human and agent under partial autonomy. We conclude that mixed-initiative decision sharing depends on designing interfaces that support human-agent transparency.

