Abstract

Multiagent systems (MAS) are gaining widespread attention in various applications, especially with the rapid expansion of the internet and computing. In general, a MAS consists of several agents working together in the same environment to achieve common objectives, and such systems play a critical role in supporting humans in attaining a comfortable life. Multiagent systems are used in numerous applications such as robotics, cloud computing, e-health, and the military. Under such circumstances, trust among agents is crucial to ensure the successful completion of a task, especially when the task can only be completed by sharing information and resources among the agents. This paper presents, for the first time, a study on the development of trust estimation models as part of a decision-making framework that empirically computes and evaluates the trustworthiness of agents in a MAS, where an agent can choose to cooperate and collaborate only with other trustworthy agents. The novelty of this study lies in the incorporation of the beta reputation system into a Markov Games-based temporal difference learning framework and the development of novel heuristic computation techniques adapted from averaging methods. These models form an integral part of the proposed MAS decision-making framework, which chooses agents to collaborate with based on their trustworthiness. The developed models and the framework are tested for accuracy, efficiency, and effectiveness using several analyses, such as time-step analysis, root-mean-square analysis, and interaction analysis. The performance of the proposed models was better than that of the trust models reported in the literature. Further real-world experiments are carried out to test the viability of the developed models in real-world applications.
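
To make the beta reputation idea concrete, the sketch below shows the standard beta-reputation trust score (the expected value of a Beta(r+1, s+1) distribution over positive and negative interaction outcomes, as in Jøsang and Ismail's formulation) and a simple threshold-based partner-selection step. This is only an illustration under assumed names and an assumed threshold value; the paper's actual integration with Markov Games-based temporal difference learning and its heuristic averaging techniques are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class BetaReputation:
    """Beta reputation record for a single peer agent.

    r counts satisfactory interaction outcomes, s counts unsatisfactory ones.
    """
    r: float = 0.0
    s: float = 0.0

    def update(self, satisfactory: bool, weight: float = 1.0) -> None:
        """Record the outcome of one interaction."""
        if satisfactory:
            self.r += weight
        else:
            self.s += weight

    def trust(self) -> float:
        """Expected value of Beta(r+1, s+1), the standard
        beta-reputation trust score in [0, 1]."""
        return (self.r + 1.0) / (self.r + self.s + 2.0)


# Hypothetical usage: an agent keeps a reputation record per peer and
# collaborates only with peers whose trust score exceeds a threshold.
TRUST_THRESHOLD = 0.7  # illustrative value, not taken from the paper

peers = {"agent_B": BetaReputation(), "agent_C": BetaReputation()}
peers["agent_B"].update(satisfactory=True)
peers["agent_B"].update(satisfactory=True)
peers["agent_C"].update(satisfactory=False)

trusted = [name for name, rep in peers.items() if rep.trust() > TRUST_THRESHOLD]
print(trusted)  # only agent_B qualifies once enough positive evidence accumulates
```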
