Abstract

In this paper, we study human trust and its computational models in supervisory control of swarm robots under varying levels of autonomy (LOA) in a target foraging task. We implement three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the MI LOA, the swarm is controlled collaboratively by a human operator and an autonomous search algorithm, whereas in the manual and autonomous LOAs it is directed entirely by the human and by the search algorithm, respectively. Our user studies show that, because the swarm's task performance is not readily perceivable, operators base their decisions on the swarm's physical characteristics rather than on its performance. Based on this analysis, we formulate trust as a Markov decision process (MDP) whose state space comprises the factors affecting trust, and we develop variations of the trust model for the different LOAs. We employ an inverse reinforcement learning (IRL) algorithm to learn operator behaviors from demonstrations; the learned behaviors are then used to predict human trust. Compared to an existing model, our models reduce the prediction error by up to 39.6%, 36.5%, and 28.8% in the manual, MI, and autonomous LOAs, respectively.
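
To make the formulation concrete, the following is a minimal sketch of how trust as an MDP with an IRL-recovered reward might be written. The specific state decomposition, the symbols τ, φ, and θ, and the choice of a maximum-entropy IRL objective are our illustrative assumptions; the abstract specifies only that trust is modeled as an MDP and that an IRL algorithm is used.

```latex
% Hedged sketch: trust modeled as an MDP whose state includes
% trust-affecting factors. Notation here is illustrative.
\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, R),
\qquad
s_t = (\tau_t, \mathbf{x}_t) \in \mathcal{S}

% \tau_t : operator trust level at time t
% \mathbf{x}_t : observable swarm characteristics (e.g., the physical
%                features the user studies found salient)

% Given operator demonstrations D = \{\xi_1, \dots, \xi_N\}, one common
% IRL formulation (maximum-entropy IRL; an assumption, not necessarily
% the paper's algorithm) recovers reward weights \theta:
\theta^{\ast}
  = \arg\max_{\theta} \sum_{i=1}^{N} \log P(\xi_i \mid \theta),
\qquad
P(\xi \mid \theta)
  \propto \exp\!\Big( \theta^{\top} \!\!\sum_{(s,a) \in \xi} \phi(s,a) \Big)
```

Under this reading, rolling the learned policy forward through the MDP yields predicted trust trajectories, which is presumably what is compared against the existing model in each LOA.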
