Abstract

Mutual trust is a key factor in human–human collaboration. Inspired by this social interaction, we analyse human–agent mutual trust in the collaboration between humans and (semi)autonomous multi-agent systems. Human–agent mutual trust should be bidirectional: it determines the human's acceptance, and hence use, of autonomous agents, as well as the agents' willingness to accept the human's commands. It is especially important when a human collaborates with multiple agents concurrently. In this paper, we derive time-series models of human–agent mutual trust based on results from human factors engineering. To avoid both 'over-trust' and 'under-trust', we set up dynamic timing models for the multi-agent scheduling problem and develop necessary and sufficient conditions to test the schedulability of the human multi-agent collaborative task. Our simulation results show that the proposed algorithm guarantees effective real-time scheduling of the human multi-agent collaboration system while ensuring a proper level of human–agent mutual trust.
