Abstract

Trust has been clearly identified as a key concept for human–machine interaction (HMI): on the one hand, users should trust artificial systems; on the other hand, devices must be able to estimate both how much other agents trust them and how trustworthy those agents are. Indeed, the applications of trust in these scenarios are so complex that interaction models often consider only a subset of the possible interactions rather than the system in its entirety. In contrast, in this work we consider the different types of interaction together, showing the advantages of this approach and the problems it allows us to address. After the theoretical formalization, we introduce an agent-based simulation to demonstrate the functioning of the proposed model. The results of this work provide interesting insights for the evolution of HMI models.
