Abstract

In open multi-agent systems, agents typically need to rely on others for the provision of information or the delivery of resources. However, since different agents' capabilities, goals and intentions do not necessarily agree with each other, trust cannot be taken for granted: an agent cannot always be expected to be willing and able to perform optimally from a focal agent's point of view. Instead, the focal agent has to form and update beliefs about other agents' capabilities and intentions. Many different approaches, models and techniques have been used for this purpose, generating trust and reputation values. In this paper, employing one particularly popular trust model, we focus on the way an agent may use such trust values in trust-based decision-making about the value of a binary variable. We use computer simulation experiments to assess the relative efficacy of a variety of decision-making methods. In doing so, we argue for systematic analysis of such methods beforehand, so that, based on an investigation of the characteristics of different methods, different classes of parameter settings can be distinguished. Whether, on average across many random problem instances, a certain method performs better or worse than alternatives is not the issue, given that the agent using the method always exists in a particular setting. We find that combining trust values using our likelihood method yields performance that is relatively robust to changes in the setting an agent may find herself in.
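
To illustrate the kind of decision problem discussed above, the sketch below shows one common way a likelihood-based combination of trust values could work for a binary variable. This is only an assumption-laden illustration, not the paper's actual model: here each informant's trust value is interpreted as the probability that the informant reports the true value, and reports are combined via a log-odds (likelihood ratio) update of a prior. The function name `decide_binary` and all parameters are hypothetical.

```python
import math

def decide_binary(reports, trust_values, prior=0.5):
    """Combine binary reports from several informants into a single decision.

    Assumption for this sketch: each trust value t is read as the probability
    that the corresponding informant reports the true value of the variable.
    The posterior probability that the variable is True is obtained by a
    likelihood-ratio update of the prior in log-odds space.
    """
    # Log-odds of the prior belief that the variable is True.
    log_odds = math.log(prior / (1.0 - prior))
    for report, t in zip(reports, trust_values):
        # Clamp trust values to avoid infinite likelihood ratios at 0 or 1.
        t = min(max(t, 1e-6), 1.0 - 1e-6)
        # A "True" report multiplies the odds by t / (1 - t);
        # a "False" report multiplies them by (1 - t) / t.
        ratio = t / (1.0 - t) if report else (1.0 - t) / t
        log_odds += math.log(ratio)
    posterior = 1.0 / (1.0 + math.exp(-log_odds))
    return posterior > 0.5, posterior

# Example: three informants with differing trust values report on the variable.
decision, p = decide_binary(reports=[True, True, False],
                            trust_values=[0.9, 0.6, 0.7])
print(decision, round(p, 3))  # True, roughly 0.85 under these assumptions
```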
