Abstract
Mutual trust is a key factor in human–human collaboration. Inspired by this social interaction, we analyse human–agent mutual trust in collaboration between humans and (semi)autonomous multi-agent systems. Human–agent trust is inherently bidirectional: it determines both the human's acceptance, and hence use, of autonomous agents and the agents' willingness to accept the human's commands. This is especially important when a human collaborates with multiple agents concurrently. In this paper, we derive time-series models of human–agent mutual trust based on results from human factors engineering. To avoid both 'over-trust' and 'under-trust', we set up dynamic timing models for the multi-agent scheduling problem and develop necessary and sufficient conditions to test the schedulability of the human multi-agent collaborative task. Our simulation results show that the proposed algorithm guarantees effective real-time scheduling of the human multi-agent collaboration system while maintaining a proper level of human–agent mutual trust.
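The abstract does not give the trust model itself, but human factors work of this kind often uses a first-order discrete-time update in which trust grows with agent performance and decays with faults. The following is a minimal illustrative sketch under that assumption; the function name, coefficients, and saturation bounds are hypothetical, not taken from the paper.

```python
# Illustrative sketch only (NOT the paper's exact model): a linear
# first-order discrete-time trust update driven by agent performance
# and fault signals. Coefficients a, b, c and the [0, 1] bounds are
# hypothetical choices for demonstration.

def update_trust(trust, performance, fault, a=0.9, b=0.2, c=0.3):
    """One time step of a first-order trust dynamics model.

    trust       : current trust level in [0, 1]
    performance : agent performance signal in [0, 1]
    fault       : agent fault/failure signal in [0, 1]
    """
    new_trust = a * trust + b * performance - c * fault
    # Saturate trust to [0, 1]; 'over-trust' and 'under-trust' then
    # correspond to leaving a desired operating band inside this range,
    # which a scheduler could monitor when allocating tasks.
    return max(0.0, min(1.0, new_trust))

# Simulate a short horizon of fault-free, high-performance operation:
# trust rises toward its upper saturation.
trust = 0.5
for k in range(5):
    trust = update_trust(trust, performance=1.0, fault=0.0)
print(round(trust, 3))
```

A scheduler in the spirit of the paper could use such a signal by assigning work to an agent only while its trust estimate stays within an acceptable band, which is the intuition behind the schedulability conditions the abstract mentions.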