Abstract

Security is a central concern in multi-agent systems, where agents dynamically enter and leave the system. Different models of trust have been proposed to help agents decide whether to interact with requesters that are unknown (or not well known) to the service provider. To this end, this paper advances our work on security for agent-based systems, which is grounded in the service provider's trust evaluation of its counterpart. Agents are autonomous software entities equipped with advanced communication capabilities (using public dialogue-game-based protocols and private strategies governing how those protocols are used) and reasoning capabilities. The service provider agent obtains reports from trustworthy agents (based on their direct interaction histories) and from referee agents (in the form of recommendations), and combines several measurements, such as the number of interactions and their timely relevance, to produce an overall estimate of a particular agent's likely behavior. By asking the agent under evaluation, called the target agent, to report the number of interactions it has had with each agent, the service provider can penalize agents who lied about having information relevant to the trust evaluation. In addition, after a period of time, the actual behavior of the target agent is compared against the information provided by others. This comparison both adjusts the credibility of the agents contributing to the trust evaluation and improves the system's trust estimation by minimizing the estimation error. Overall, the proposed framework is shown to effectively assist agents in estimating the trustworthiness of interacting agents.
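The abstract outlines three mechanisms: weighting witness reports by credibility, interaction count, and timely relevance; cross-checking the target's reported interaction counts to penalize witnesses who lied about having information; and adjusting witness credibility after observing the target's actual behavior. The following is a minimal Python sketch of these steps, assuming an exponential time-decay for relevance, ratings in [0, 1], and illustrative names (`estimate_trust`, `penalize_liars`, `update_credibility`) that are not taken from the paper.

```python
import math

# Illustrative sketch, not the paper's exact formulation.
DECAY = 0.01  # assumed decay rate for timely relevance


def time_relevance(report_age: float) -> float:
    """Weight a report by recency: newer reports count more."""
    return math.exp(-DECAY * report_age)


def estimate_trust(reports, credibility):
    """Combine witness reports into one trust estimate.

    Each report is (witness_id, rating in [0, 1], n_interactions, age).
    `credibility` maps witness_id -> weight in [0, 1].
    """
    num, den = 0.0, 0.0
    for witness, rating, n_interactions, age in reports:
        w = credibility.get(witness, 0.5) * n_interactions * time_relevance(age)
        num += w * rating
        den += w
    return num / den if den else 0.5  # neutral prior when no evidence


def penalize_liars(credibility, target_counts, reports, penalty=0.2):
    """Cross-check: a witness claiming more interactions than the
    target reports is penalized for lying about having information."""
    for witness, _, n_claimed, _ in reports:
        if n_claimed > target_counts.get(witness, 0):
            credibility[witness] = max(0.0, credibility.get(witness, 0.5) - penalty)
    return credibility


def update_credibility(credibility, reports, observed_behavior, lr=0.1):
    """After observing the target's actual behavior, raise the
    credibility of accurate witnesses and lower that of inaccurate
    ones, reducing future estimation error."""
    for witness, rating, _, _ in reports:
        error = abs(rating - observed_behavior)
        new_value = credibility.get(witness, 0.5) + lr * (0.5 - error)
        credibility[witness] = min(1.0, max(0.0, new_value))
    return credibility


# Hypothetical usage: two witnesses, one recent heavy interactor.
reports = [("w1", 0.9, 12, 3600.0), ("w2", 0.2, 5, 86400.0)]
cred = {"w1": 0.8, "w2": 0.6}
print(estimate_trust(reports, cred))
cred = penalize_liars(cred, target_counts={"w1": 12, "w2": 1}, reports=reports)
cred = update_credibility(cred, reports, observed_behavior=0.85)
```

The multiplicative weighting means a witness with many recent interactions and a good track record dominates the estimate, which matches the abstract's claim that interaction counts and timely relevance are combined with credibility; the exact combination rule here is an assumption.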
