Abstract

Mutual trust is a key factor in human-human collaboration. Inspired by this social interaction, we propose to analyze human-agent mutual trust in the collaboration between humans and (semi)autonomous multi-agent systems. Human-agent mutual trust is bidirectional: it determines both the human's acceptance and use of autonomous agents and the agents' willingness to take the human's commands. It is especially important when a human collaborates with multiple agents concurrently. In this paper, we propose time-series human-agent mutual trust models based on well-known results from human factors engineering. To avoid both “over-trust” and “under-trust”, we set up dynamic timing models for the multi-agent scheduling problem and develop necessary and sufficient conditions to test the schedulability of the human-multi-agent collaborative task. We demonstrate the effectiveness of the proposed scheduling algorithm through MATLAB simulations, which show that it guarantees effective real-time scheduling of the human-multi-agent collaboration system while ensuring a proper level of mutual trust.
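To illustrate what a time-series trust model of this kind can look like, the sketch below gives a minimal discrete-time trust update for a single human-agent pair, loosely in the spirit of trust dynamics from the human factors literature. The update coefficients, the performance and fault signals, and the under-trust/over-trust thresholds are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical discrete-time trust update: trust decays toward its previous
# value, increases with observed agent performance, and decreases with faults.
# alpha, beta_p, beta_f and the thresholds below are illustrative placeholders.

def update_trust(trust, performance, faults, alpha=0.8, beta_p=0.15, beta_f=0.25):
    """Return the trust level at the next time step, clipped to [0, 1]."""
    next_trust = alpha * trust + beta_p * performance - beta_f * faults
    return float(np.clip(next_trust, 0.0, 1.0))

def trust_in_bounds(trust, lower=0.4, upper=0.9):
    """Check the trust level against illustrative under-/over-trust thresholds."""
    return lower <= trust <= upper

# Example: propagate trust through a few observations of performance and faults.
trust = 0.5
for performance, faults in [(0.9, 0.0), (0.7, 0.1), (0.3, 0.4)]:
    trust = update_trust(trust, performance, faults)
    print(f"trust = {trust:.2f}, in bounds: {trust_in_bounds(trust)}")
```

In a scheduling setting, a check like trust_in_bounds would be evaluated for each agent at each decision instant, so that the scheduler can allocate the human's attention before any agent's trust level drifts outside the acceptable band.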
